Summary

Review a summary of what we have learned in this chapter.

In this chapter, we learned about the following concepts.

Regularization

  • Regularization plays an important role in training models that generalize well and avoids issues related to multicollinearity in the data. It prevents overfitting by imposing a penalty on the model coefficients.

  • The two most common regularization methods are ridge and lasso (L2 and L1, respectively). Another one is elastic net, which is a mixture of ridge and lasso regularization.
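The three penalty terms can be made concrete with a small sketch. The coefficient vector below is hypothetical, and the elastic net combination follows scikit-learn's convention (an overall strength `alpha` and a mixing parameter `l1_ratio`); other libraries may parameterize the blend differently.

```python
import numpy as np

# A hypothetical coefficient vector to illustrate the penalty terms.
beta = np.array([2.0, -1.0, 0.5])

l2_penalty = np.sum(beta ** 2)     # ridge (L2): 2^2 + 1^2 + 0.5^2 = 5.25
l1_penalty = np.sum(np.abs(beta))  # lasso (L1): 2 + 1 + 0.5 = 3.5

# Elastic net blends the two penalties; alpha scales the overall
# strength, and l1_ratio sets the L1/L2 mix (scikit-learn convention).
alpha, l1_ratio = 1.0, 0.5
elastic_penalty = alpha * (l1_ratio * l1_penalty
                           + 0.5 * (1 - l1_ratio) * l2_penalty)
```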

Penalties

  • The ridge penalty term will be zero if α is zero, and the loss function will simply be the least squares function, RSS. On the other hand, if α is very large, the RSS component will have a much smaller effect on the loss than the regularization term.
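This behavior of α can be verified directly with the closed-form ridge solution, (XᵀX + αI)⁻¹Xᵀy. The data below is synthetic: with α = 0 the ridge coefficients match ordinary least squares, and with a very large α they are shrunk toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge_coefs(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha * I)^{-1} X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # plain least squares
beta_a0 = ridge_coefs(X, y, 0.0)    # alpha = 0: identical to OLS
beta_big = ridge_coefs(X, y, 1e6)   # huge alpha: coefficients near zero
```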

  • Lasso can result in sparse models with few coefficients because some coefficients can shrink to exactly zero and be eliminated from the model. This makes a lasso model easier to interpret than a ridge model. Larger penalties push coefficient values closer to zero, which is ideal for producing simpler models. Lasso performs both variable selection and regularization to enhance the model's prediction accuracy and interpretability.
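The sparsity property can be seen on synthetic data where only the first two of ten features actually drive the response; with a sufficiently large penalty, the lasso zeroes out most of the irrelevant coefficients. The data and penalty value here are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
# Only the first two features matter; the remaining eight are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

lasso = Lasso(alpha=0.5).fit(X, y)
n_zero = int(np.sum(lasso.coef_ == 0.0))  # coefficients eliminated
```

The relevant coefficients survive (shrunk toward zero), while most noise coefficients become exactly zero, which is what makes the fitted model easy to interpret.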

  • In the elastic net, the relative contribution of the ridge and lasso penalties is balanced by two parameters, one weighting the L1 term and one weighting the L2 term.
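As a sketch of this balance, scikit-learn's `ElasticNet` exposes the two knobs as an overall strength `alpha` and a mixing parameter `l1_ratio` (1.0 is pure lasso, 0.0 is pure ridge). The data below is synthetic, and the parameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 8))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

# l1_ratio blends the penalties: 1.0 behaves like lasso,
# values near 0.0 behave like ridge.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
```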
