A critical concept in machine learning

The bias-variance tradeoff is a critical concept in machine learning. The most valuable machine learning models are those that generalize, producing accurate predictions on new, unseen data. As a machine learning practitioner, crafting such models requires carefully balancing bias against variance.

This lesson builds on your knowledge of underfitting and overfitting to develop an understanding of the bias-variance tradeoff. Tuning models for this tradeoff using the cross-validation technique will be covered later in the course.

The dart throwing analogy

An intuitive way to learn the bias-variance tradeoff is by using the analogy of throwing darts at a dartboard. To make this analogy relatable, consider a hypothetical dart enthusiast named Bob.

Bob is an avid participant in dart throwing competitions and has decided to improve his skill. To give himself ample time to practice, Bob sets up a dartboard at home.

High bias, low variance

Consider a scenario where Bob makes a mistake when setting up his dartboard at home: he mounts the board higher on the wall than regulation height. Now imagine Bob practices only at home, becoming quite good at dart throwing.

When the day of Bob’s next competition arrives, he is quite disappointed by his performance. The following image shows where his darts landed:
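The scenario above can be sketched as a small simulation. All of the numbers here (the upward offset, the scatter, the throw count) are hypothetical choices for illustration: Bob's practice on the mis-mounted board shifts every throw upward by a fixed amount (high bias), while his throws land close to one another (low variance).

```python
import random

random.seed(0)  # for a reproducible sketch

BULLSEYE = (0.0, 0.0)  # target center on the regulation board
BIAS_Y = 5.0           # hypothetical systematic upward offset from the high board
SPREAD = 0.5           # small scatter: Bob's throws are consistent

# Simulate 1,000 throws: centered on the wrong spot, but tightly clustered.
throws = [
    (random.gauss(0.0, SPREAD), random.gauss(BIAS_Y, SPREAD))
    for _ in range(1000)
]

# Bias: the average landing point sits far from the bullseye.
mean_x = sum(x for x, _ in throws) / len(throws)
mean_y = sum(y for _, y in throws) / len(throws)

# Variance: how tightly the throws cluster around their own average.
var = sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in throws) / len(throws)

print(f"average landing point: ({mean_x:.2f}, {mean_y:.2f})")  # near (0, 5), not the bullseye
print(f"scatter around that point: {var:.2f}")                 # small
```

The average landing point is far from the bullseye (the bias), yet the throws barely scatter around it (the variance): exactly the tight-but-off-target pattern in the image.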
