Explainability Methods

Learn about explainability methods like SHAP and LIME.

There are three broad categories of explainability methods: self-explainable models, global explanations, and local explanations.

Self-explainable models

Self-explainable models are algorithms that are inherently interpretable (e.g., linear regressions, decision trees) simply by inspecting their architecture or formulae. Global and local explanations are more involved and recover only a fraction of the interpretability that self-explainable models provide.
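
To make this concrete, here is a minimal sketch of why a linear regression counts as self-explainable. It uses scikit-learn and a synthetic dataset purely for illustration; none of the names come from the course itself.

```python
# A minimal sketch: the fitted coefficients of a linear regression *are* the explanation.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=4, noise=0.1, random_state=0)
model = LinearRegression().fit(X, y)

# Each coefficient says directly how the prediction moves per unit change in the
# corresponding feature -- no extra explanation machinery is needed.
for i, coef in enumerate(model.coef_):
    print(f"feature_{i}: {coef:.3f}")
```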

Global explanations

Global explanations are separate models constructed to approximate the model in question. The original model is treated as a black box that can be queried repeatedly, and its responses are used to build a second, more interpretable model that explains it.

Shapley Additive Explanations (SHAP)

SHapley Additive exPlanations (SHAP) is one example of a global explanation. SHAP is a game-theoretic approach that treats the output of the black box as the payoff of a cooperative game.

In this game, each variable used in the algorithm is a player. Given a particular regression output, each variable can choose to “play” or “not play.” Those that play receive a payoff proportionate to how predictive they are in combination. If all variables play, for example, the total payoff (the predicted target) will be relatively low, because some features add little predictive power in combination with the others, and it must be divided among all of the participating players. Ideally, the variables that do play are highly predictive in combination and can enjoy a large payoff distributed among fewer players.

The advantage of this method is that it can show the importance of each variable at both high and low values. The downside is that importance is not the same as interpretability: knowing a feature’s impact on the model’s output does not tell us how that output changes with subtle changes to the input.
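
The sketch below shows what computing SHAP values might look like in practice, assuming the shap package is installed; the random forest and synthetic data are placeholders chosen for illustration, not the course’s own example.

```python
# A minimal sketch of SHAP feature importance on a toy tree ensemble.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# The mean absolute SHAP value per feature acts as a global importance score:
# the average payoff each "player" receives across the dataset.
importance = np.abs(shap_values).mean(axis=0)
for i, imp in enumerate(importance):
    print(f"feature_{i}: {imp:.3f}")
```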

Surrogate models

Surrogate models are a simpler way of arriving at global explanations. They are very similar to the distillation models seen earlier in the course in that they’re student models trained on the same input data, with the black-box model’s predictions as the target. For example, it’s possible to simply fit a linear regression on the outputs of a random forest, and then use the linear regression to gain visibility into how the random forest prioritizes its features.

Let’s construct a quick surrogate model as an example.
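
Here is a minimal sketch of such a surrogate, assuming scikit-learn and a synthetic regression dataset; the specific models and parameters are illustrative choices rather than the course’s exact code.

```python
# A minimal sketch of a surrogate model: a linear regression that imitates a random forest.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=6, noise=0.2, random_state=0)

# The black-box model we want to explain.
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not on the true labels.
surrogate = LinearRegression().fit(X, black_box.predict(X))

# The surrogate's coefficients approximate how the forest weighs each feature.
for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: {coef:.3f}")

# R^2 of the surrogate against the black box's outputs indicates how faithful the proxy is.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```

A high fidelity score suggests the linear surrogate is a reasonable proxy for the forest; a low score means its coefficients should not be trusted as an explanation.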
