Apache Hadoop 2.0: Data Analysis with the Hortonworks Data Platform using Pig and Hive
Monday, 7 April 2014 at 09:00 - Thursday, 10 April 2014 at 17:00 (BST)
London, United Kingdom
This 4-day hands-on training course teaches you how to develop applications and analyze Big Data stored in Apache Hadoop 2.0 using Pig and Hive. You will learn the details of Hadoop 2.0, YARN, and the Hadoop Distributed File System (HDFS), get an overview of MapReduce, and take a deep dive into using Pig and Hive to perform data analytics on Big Data.
Other topics covered include data ingestion using Sqoop and Flume, and defining workflow using Oozie.
Note: this course was formerly named Developing Apache Hadoop 2.0 Solutions for Data Analysts.
At the completion of the course you will be able to:
- Explain Hadoop 2.0 and YARN
- Explain use cases for Hadoop
- Explain how HDFS Federation works in Hadoop 2.0
- Explain the various tools and frameworks in the Hadoop 2.0 ecosystem
- Explain the architecture of the Hadoop Distributed File System (HDFS)
- Use the Hadoop client to input data into HDFS
- Use Sqoop to transfer data between Hadoop and a relational database
- Explain the architecture of MapReduce
- Explain the architecture of YARN
- Run a MapReduce job on YARN
- Write a Pig script to explore and transform data in HDFS
- Define advanced Pig relations
- Use Pig to apply structure to unstructured Big Data
- Invoke a Pig User-Defined Function
- Use Pig to organize and analyze Big Data
- Understand how Hive tables are defined and implemented
- Use the new Hive windowing functions
- Explain and use the various Hive file formats
- Create and populate a Hive table that uses the new ORC file format
- Use Hive to run SQL-like queries to perform data analysis
- Use Hive to join datasets using a variety of techniques, including Map-side joins and Sort-Merge-Bucket joins
- Write efficient Hive queries
- Create ngrams and context ngrams using Hive
- Perform data analytics such as quantiles and PageRank on Big Data using the DataFu Pig library
- Explain the uses and purpose of HCatalog
- Use HCatalog with Pig and Hive
- Define a workflow using Oozie
- Schedule a recurring workflow using the Oozie Coordinator
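As a taste of the Hive material above (ORC tables and windowing functions), a minimal sketch might look like the following; the table and column names are illustrative, not taken from the course materials:

```sql
-- Create a Hive table stored in the ORC file format (available since Hive 0.11)
CREATE TABLE page_views (
  user_id   STRING,
  url       STRING,
  view_time TIMESTAMP
)
STORED AS ORC;

-- Windowing function: number each user's page views from most to least recent
SELECT user_id,
       url,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY view_time DESC) AS rn
FROM page_views;
```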
Target Audience / Prerequisites
This course is designed for Data Analysts, BI Analysts, BI Developers, SAS Developers and other analysts who need to answer questions and analyze Big Data stored in a Hadoop cluster.
Students should be familiar with SQL and have a basic understanding of programming principles. No prior Hadoop knowledge is required.
All equipment and infrastructure required for the lab exercises is provided.
Unlimited teas, coffees & soft drinks provided.
Cancellation & Reschedule Policy
You must provide written notice to Big Data Partnership at least 2 weeks prior to the start of the class if you cannot attend. Big Data Partnership will transfer your registration to a future class of equal or lesser value.
Students who fail to cancel at least 2 weeks in advance and/or do not attend the class will not receive a refund and will be charged the full amount.
Big Data Partnership may cancel or reschedule a class at any time at its discretion. If the class is cancelled or rescheduled, we will work with you to apply your registration to another date or refund your fee in full. Big Data Partnership is not responsible for non-refundable travel or other expenses incurred by the student.
If you have any questions concerning this class, please do not hesitate to contact email@example.com.
Big Data Partnership
Big Data Partnership is the leading European-based big data service provider.
Our team has deep expertise across a wide range of big data technologies and data science techniques.
Our recent projects have drawn on:
- the Apache Hadoop ecosystem
- Apache Spark
- Apache Cassandra
- a range of other NoSQL databases and search technologies
Big Data Partnership helps organisations across all industries become more data-driven by reducing costs and grasping new big-data opportunities, rapidly and at low risk.
We help you Discover why and how to become data-driven; we work with you to Develop and prove the value of this approach; and we Deliver cost-effective solutions that exploit faster, more scalable technology. We reduce risk by Training your staff in the necessary new skills and by providing Support.
For more information, visit http://www.bigdatapartnership.com.