Ensemble Learning Part 1
In this lesson, you’ll learn about Ensemble Learning, which combines several models to produce better predictions than any single model alone.
Ensemble Learning
Ensemble Learning involves methods that combine the decisions of several models to improve the overall performance of the predictive models. Kaggle competition winners commonly use Ensemble Learning techniques.
Family of Ensemble Models
Voting Classifier
In a voting classifier ensemble, multiple models, such as Logistic Regression and Support Vector Machines, each predict the label for an instance, and each prediction is counted as a vote for that label. The label predicted by the majority of the classifiers receives the most votes and is used as the final label.
Hard Voting
In hard voting, the final label is the label predicted by the majority of the models. Suppose we have three models with the following predictions:
- Model 1 predicts Class 1
- Model 2 predicts Class 1
- Model 3 predicts Class 2
Then hard voting would select Class 1 as the final label. In case of a tie, hard voting selects the label that comes first in ascending order of the class labels. Suppose we have two models:
- Model 1 predicts Class 2
- Model 2 predicts Class 1
Then, hard voting will assign the class label to be 1, since Class 1 precedes Class 2 in ascending order.
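To make this concrete, here is a minimal sketch of a hard-voting ensemble built with scikit-learn's VotingClassifier. The dataset, base models, and hyperparameters below are illustrative assumptions, not part of the lesson's example.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset; any classification dataset works the same way.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three base models; each casts one "vote" per instance.
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC()),
        ("tree", DecisionTreeClassifier()),
    ],
    voting="hard",  # majority vote; ties resolved by ascending label order
)
voter.fit(X_train, y_train)
print(voter.score(X_test, y_test))
```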
Soft Voting
In soft voting, we follow the steps below:
- We assign a weight to each model.
- We take the predicted class probabilities from each model.
- We multiply each model's predicted class probabilities by its weight and average the products across the models.
- The class with the highest weighted average probability is chosen as the final label.
Let’s assume we have four three-class classifiers, and we assign weights to each classifier in the following manner:
...
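As a sketch of how soft voting might look in code, the example below uses scikit-learn's VotingClassifier with voting="soft". The dataset, base models, and the weights [2, 1, 1] are illustrative assumptions, not the weights from the example above.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),  # soft voting needs predict_proba
        ("nb", GaussianNB()),
    ],
    voting="soft",
    weights=[2, 1, 1],  # weighted average of the predicted probabilities
)
voter.fit(X_train, y_train)
print(voter.predict_proba(X_test[:1]))  # weighted average class probabilities
print(voter.predict(X_test[:1]))        # class with the highest average
```

Note that every base model must expose predicted class probabilities for soft voting to work; that is why SVC is constructed with probability=True here.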