Introduction to Tree-Based Methods
This lesson will provide an overview of tree-based methods like decision trees, random forest, and gradient boosting.
Quick Overview
Tree-based learning algorithms (decision trees, bagging, random forests, boosting, etc.) are highly effective for supervised learning. Their appeal comes from their high accuracy and versatility: they can predict both discrete outcomes (classification) and continuous outcomes (regression).
We will go over three tree-based algorithms:
- Decision Trees
- Random Forests
- Gradient Boosting
Decision Trees
Decision trees (DTs) learn patterns by recursively splitting the data into groups. At each split, the tree chooses the variable and threshold that best separate the data into homogeneous groups, using a splitting criterion such as entropy (a measure of impurity, i.e., how mixed the classes are within a group) for classification, or variance reduction for regression. The primary appeal of decision trees is that they can be displayed graphically as a tree-like diagram and are easy to explain to non-experts.
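To make the splitting idea concrete, here is a minimal sketch (using only the Python standard library, with a made-up toy dataset) of how a tree might evaluate candidate thresholds on one feature and pick the one that yields the lowest weighted entropy:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels (0 = perfectly pure group)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def split_entropy(values, labels, threshold):
    """Weighted average entropy of the two groups created by value <= threshold."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)

# Toy data: one numeric feature, two classes.
feature = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
labels  = ["a", "a", "a", "b", "b", "b"]

# The tree picks the candidate threshold with the lowest weighted entropy.
candidates = [2.0, 3.0, 4.0, 5.0]
best = min(candidates, key=lambda t: split_entropy(feature, labels, t))
print(best)  # 3.0 separates the classes perfectly (entropy drops to 0)
```

A real decision tree repeats this search over every feature and then recurses into each resulting group; libraries such as scikit-learn implement this (plus pruning and stopping rules) far more efficiently.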