Types of Ensemble Learning

Discover the concept of majority voting and explore the techniques of bagging and boosting.

Majority voting

Majority voting is a simple and widely used technique in ensemble learning that combines the predictions of multiple individual models (often called base models or weak learners) to make a final prediction. The idea behind majority voting is straightforward: each model in the ensemble makes a prediction, and the final prediction is determined by a majority vote among these individual predictions.

Consider an example of binary classification where we aim to determine whether a test data point belongs to class 0 or class 1. Additionally, suppose we have three trained models (weak learners) that each provide a prediction for the class of that test point. Our ensemble model makes its prediction by taking the majority output: if most of the models predict 0, the ensemble also predicts 0. This process is illustrated in the accompanying animation.
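A minimal sketch of this three-model majority (hard) vote, assuming scikit-learn is available; the specific base models, the synthetic dataset, and all parameter values are illustrative choices, not part of the original example:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification data (classes 0 and 1); purely illustrative.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three weak learners whose individual predictions will be combined.
base_models = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=42)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# Hard voting: each model predicts a class label, and the majority wins.
ensemble = VotingClassifier(estimators=base_models, voting="hard")
ensemble.fit(X_train, y_train)

# For a single test point, compare the individual predictions with the
# ensemble's majority-vote prediction.
point = X_test[:1]
individual = {
    name: int(model.fit(X_train, y_train).predict(point)[0])
    for name, model in base_models
}
print("Individual predictions:", individual)
print("Majority-vote prediction:", int(ensemble.predict(point)[0]))
```

If, say, two of the three models predict class 0 and one predicts class 1, the ensemble's output is class 0, exactly the majority rule described above.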
