LIME

Learn about local interpretable model-agnostic explanations for neural networks, also known as LIME.

Local interpretable model-agnostic explanations

Local interpretable model-agnostic explanations (LIME) is a technique that can explain the predictions of any classifier or regressor by approximating it locally with an interpretable model. It generates many perturbed variants of a single data sample by tweaking the feature values, observes the resulting impact on the model's output, and fits an interpretable surrogate model to these perturbations, weighted by their proximity to the original sample. The output of LIME is a saliency map representing the contribution of each feature to the prediction.
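To make the procedure concrete, here is a minimal, self-contained sketch of the idea in Python. It is illustrative rather than a reimplementation of the official lime package: the function name lime_explain, the Gaussian perturbation scale, and the kernel_width default are assumptions chosen for this example.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(f, x, num_samples=1000, kernel_width=0.75):
    """Sketch of a LIME-style explanation for one tabular instance.

    f: black-box prediction function mapping an (n, d) array to n scores.
    x: 1-D instance of d (standardized) features to explain.
    Returns one weight per feature from a local linear surrogate.
    """
    d = x.shape[0]
    # 1. Perturb the instance: sample points around x.
    Z = x + np.random.normal(scale=1.0, size=(num_samples, d))
    # 2. Query the black-box model on the perturbed samples.
    y = f(Z)
    # 3. Weight each sample by its proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted linear surrogate; its coefficients are the
    #    per-feature contributions (the saliency map).
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_
```

With a trained classifier clf, calling lime_explain(lambda Z: clf.predict_proba(Z)[:, 1], x) returns one weight per feature; the largest magnitudes mark the features that most influence the prediction near x.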

Given a data point $X$, a neural network $f(\cdot)$ can be approximated by a linear function in the local neighborhood of $X$. In other words, for perturbed samples $z$ near $X$:

$$f(z) \approx g(z) = w^\top z + b,$$

where $g$ is the interpretable linear surrogate and the weights $w$ quantify each feature's contribution to the prediction.
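For reference, the original LIME paper (Ribeiro et al., 2016) formalizes this as picking the surrogate $g$ from a class of interpretable models $G$ that best matches $f$ near $X$:

$$\xi(X) = \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_X) + \Omega(g),$$

where $\pi_X$ weights perturbed samples by their proximity to $X$, $\mathcal{L}$ is a locally weighted squared loss such as $\sum_z \pi_X(z)\,\bigl(f(z) - g(z)\bigr)^2$, and $\Omega(g)$ penalizes the complexity of the surrogate.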
