Two of the most powerful ideas in supervised Machine Learning are Bagging and Boosting, most famously applied in Random Forests and Gradient Boosting Machines. These two techniques provide world-class performance in predictive modelling. In this one-day course, you will start by learning Decision Tree theory and practice, and will naturally progress to Bagging and Boosting from a practical perspective. By the end of the day, you will have the skills and confidence to fit Random Forests, Gradient Boosting Machines, and their derivative models in practice.
WHAT WILL I LEARN?
At the end of this course, you will:
• Understand the various types of Decision Trees available, and their strengths and weaknesses
• Be aware of the practical issues in using Decision Trees and how to overcome them
• Know when Decision Trees are useful (and when they are not)
• Understand Bagging, why it works, and how it generalises Decision Trees to create the powerful technique of Random Forests
• Understand the concept of Weak Learners and one of the earliest and most famous boosting techniques, AdaBoost
• Understand Gradient Boosting Machines and the role that Decision Trees play in this technique.
• Be aware of different approaches to Variable Importance and Partial Dependence plots, and their strengths and weaknesses
• Be aware of the different packages available in R for Random Forests and Gradient Boosting and know which to use in your circumstances (see the sketch below)
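To give a flavour of the practical side, here is a minimal sketch of fitting both model families in R. The randomForest and gbm packages, the built-in iris and mtcars datasets, and the tuning values shown are illustrative choices for this sketch only; the course compares the available packages and their options in more depth.

# Minimal sketch: a Random Forest and a Gradient Boosting Machine in R.
# Package and parameter choices here are illustrative, not prescriptive.
library(randomForest)  # install.packages("randomForest")
library(gbm)           # install.packages("gbm")

# Random Forest: bagged decision trees with a random subset of
# features considered at each split.
rf_fit <- randomForest(Species ~ ., data = iris,
                       ntree = 500, importance = TRUE)
print(rf_fit)       # out-of-bag error estimate
varImpPlot(rf_fit)  # variable importance plot

# Gradient Boosting Machine: shallow trees fitted sequentially,
# each one correcting the errors of the ensemble so far.
gbm_fit <- gbm(mpg ~ ., data = mtcars, distribution = "gaussian",
               n.trees = 1000, interaction.depth = 3,
               shrinkage = 0.01, cv.folds = 5)
best_iter <- gbm.perf(gbm_fit, method = "cv")    # pick tree count by cross-validation
summary(gbm_fit, n.trees = best_iter)            # relative variable influence
plot(gbm_fit, i.var = "wt", n.trees = best_iter) # partial dependence on one variable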
WHO IS IT FOR?
Some prior knowledge of R and of high-level Machine Learning concepts is necessary. The level required can be gained from our Introduction to R for Machine Learning and Machine Learning Concepts courses.