Regression Model Explanation
Gain insights into machine learning models, including the importance of features, residual analysis, and Shapley values.
Knowing how to build explainable machine learning models is valuable: it helps us understand how a model makes its decisions, promotes transparency and accountability, builds trust in machine learning systems, and helps us identify and mitigate potential biases or errors in the model.
In this lesson, we’ll explore the airline fare predictor model using the explainability tools available in the H2O package to gain more insights. H2O provides various methods for explaining machine learning models, such as:
Variable importance
Partial dependence plots
Residual analysis
Shapley values
Feature interactions
Individual conditional expectation (ICE) plot
Individual row prediction explanations
Model performance explanations (confusion matrix, model correlation heatmap)
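Most of the outputs listed above can be generated in one call with H2O's explainability interface. The sketch below is a minimal illustration, assuming the `h2o` package is installed, a cluster has been started, and `model` and `test` are the trained estimator and test frame from earlier in the lesson (the function wrapper and its names are ours, not part of the lesson):

```python
def explain_regression_model(model, test):
    # Hedged sketch: `model` is a trained H2O estimator and `test` is an
    # H2OFrame; both are assumed to come from the earlier lesson steps.
    import h2o  # deferred import so the sketch can be read without h2o installed

    # h2o.explain() produces the combined explainability report:
    # variable importance, partial dependence plots, SHAP summary,
    # residual analysis, and related visualizations.
    return h2o.explain(model, test)
```

In a notebook, the returned object renders the full set of plots inline; individual explanations (e.g. only the residual analysis) can also be produced through the model's own methods.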
Feature importance
Feature (variable) importance measures how much an input feature has contributed to the overall accuracy of a predictive model. It’s determined by evaluating the impact of including or excluding a particular feature on the model’s performance. Features contributing positively to the model’s accuracy are given higher importance and vice versa.
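To make the idea concrete, here is a small, library-agnostic sketch of permutation importance: shuffle one feature at a time and measure how much the model's error rises. Note this is an illustration of the general concept, not how H2O computes importance for tree models (H2O derives those from split gains); the synthetic data and least-squares model below are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
# Target depends strongly on feature 0, weakly on feature 1, not on feature 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit ordinary least squares as a stand-in predictive model.
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), y, rcond=None)

def predict(M):
    return np.column_stack([M, np.ones(len(M))]) @ coef

base_mse = np.mean((y - predict(X)) ** 2)

# Permute one column at a time; the rise in error is that feature's importance.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(np.mean((y - predict(Xp)) ** 2) - base_mse)

print([round(v, 3) for v in importances])
```

Running this, feature 0 shows by far the largest error increase, feature 1 a small one, and feature 2 essentially none, matching how the target was constructed.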
The varimp_plot() method in H2O allows us to visualize the relative importance of each feature in the model, making it easier to identify the most important variables:
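A minimal usage sketch, assuming `model` is the trained H2O estimator from earlier in the lesson and an H2O session is active (the wrapper functions are our own naming, added for illustration):

```python
def plot_variable_importance(model, num_of_features=10):
    # Hedged sketch: `model` is assumed to be a trained H2O estimator.
    # varimp_plot() draws a bar chart of scaled variable importances;
    # num_of_features limits the chart to the top-ranked features.
    model.varimp_plot(num_of_features=num_of_features)

def variable_importance_table(model):
    # varimp() returns the underlying importance table, with one row
    # per feature: relative, scaled, and percentage importance.
    return model.varimp()
```

The plot ranks features by scaled importance, so the bars at the top are the variables the model relies on most.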