Developing Apache Hadoop 2.0 Solutions for Data Analysts
Tuesday, 19 November 2013 at 09:30 - Friday, 22 November 2013 at 17:30 (GMT)
London, United Kingdom
At the completion of the course, students will be able to:
- Explain Hadoop 2.0 and YARN
- Explain how HDFS Federation works in Hadoop 2.0
- Explain the various tools and frameworks in the Hadoop 2.0 ecosystem
- Explain the architecture of the Hadoop Distributed File System (HDFS)
- Use the Hadoop client to input data into HDFS
- Explain the architecture of MapReduce
- Run a MapReduce job on Hadoop
- Use Sqoop to transfer data between Hadoop and a relational database
- Write a Pig script to explore and transform data in HDFS
- Define advanced Pig relations
- Use Pig to apply structure to unstructured Big Data
- Invoke a Pig User-Defined Function
- Write a Hive query
- Understand how Hive tables are defined and implemented
- Use Hive to run SQL-like queries to perform data analysis
- Perform a multi-table select in Hive
- Design a proper schema for Hive
- Explain the uses and purpose of HCatalog
- Use HCatalog with Pig and Hive
- Use Pig to organize and analyze Big Data
- Define a workflow using Oozie
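Several of the objectives above centre on Hive's SQL-like querying, including multi-table selects. As an illustrative flavour only (plain SQL run through Python's built-in sqlite3 module rather than HiveQL, with hypothetical `customers` and `orders` tables), a multi-table select of the kind covered in the course might look like:

```python
import sqlite3

# In-memory database standing in for Hive tables (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 10.0), (1, 5.5), (2, 7.25);
""")

# A multi-table select: join customers to orders, aggregate per customer.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()

print(rows)  # [('Ada', 15.5), ('Grace', 7.25)]
```

Hive executes such queries as MapReduce jobs over data in HDFS, but the query syntax itself is close to the standard SQL shown here.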
Students will work through the following lab exercises using the Hortonworks Data Platform for Windows:
- Using HDFS commands
- Using Sqoop to transfer data between HDFS and an RDBMS
- Running a MapReduce job
- Monitoring a MapReduce job
- Exploring data with Pig
- Splitting a dataset with Pig
- Joining datasets with Pig
- Using Pig to prepare data for Hive
- Understanding Hive tables
- Analyzing Big Data with Hive
- Understanding MapReduce in Hive
- Joining datasets with Hive
- Performing a multi-table select with Hive
- Streaming data with Hive and Python
- Using HCatalog with Pig
- Computing Quantiles with Pig
- Computing n-grams with Hive
- Defining Workflow with Oozie
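The MapReduce and streaming labs above revolve around the map/shuffle/reduce flow. As a minimal local sketch (pure Python, simulating Hadoop's sort-and-shuffle step in memory rather than running on a cluster), a Streaming-style word count might look like:

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Emit (word, 1) pairs, as a Hadoop Streaming mapper would from stdin."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Sum counts per word; pairs must arrive sorted by key (the shuffle)."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

lines = ["big data big cluster", "data data"]
shuffled = sorted(mapper(lines))  # stand-in for Hadoop's sort/shuffle phase
counts = dict(reducer(shuffled))
print(counts)  # {'big': 2, 'cluster': 1, 'data': 3}
```

On a real cluster the same mapper and reducer would read from stdin and write to stdout as separate scripts launched via the Hadoop Streaming jar, with HDFS supplying the input splits.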
Target Audience / Prerequisites
This course is designed for Data Analysts, BI Analysts, BI Developers, SAS Developers and other analysts who need to answer questions and analyze Big Data stored in a Hadoop cluster.
Students should be familiar with SQL and have a basic understanding of programming principles. No prior Hadoop knowledge is required.
All necessary equipment and infrastructure required to perform lab exercises are provided.
Big Data Partnership will provide a light breakfast and lunch for each day of the class. Unlimited teas, coffees & soft drinks provided.
Cancellation & Reschedule Policy
You must provide written notice to Big Data Partnership at least two weeks prior to the start of the class if you cannot attend. Big Data Partnership will transfer your registration to a future class of equal or lesser value.
Students who fail to cancel at least two weeks in advance and/or do not attend the class will not receive a refund and will be charged the full amount.
Big Data Partnership may cancel or reschedule a class at any time at its discretion. In the event that the class is cancelled or rescheduled, we will work with you to apply your registration to another date or refund your fee in full. Big Data Partnership is not responsible for non-refundable travel or other expenses incurred by the student.
If you have any questions concerning this class, please do not hesitate to contact firstname.lastname@example.org.
Big Data Partnership
Big Data Partnership is a leading UK-based big data services provider specialising in data science, data engineering and certified training, helping customers unlock value from complex data.
For more information, contact us:
Tel: +44 (0)20 7205 2550