LIME
Learn about local interpretable model-agnostic explanations (LIME), a technique for explaining the predictions of neural networks.
Local interpretable model-agnostic explanations
Local interpretable model-agnostic explanations (LIME) is a technique that can explain the predictions of any classifier or regressor by approximating it locally with an interpretable model. It perturbs a single data sample by tweaking its feature values and observes the resulting impact on the model's output. The output of LIME is a set of per-feature weights, often visualized as a saliency map, representing the contribution of each feature to the prediction.
Given a data point, LIME generates perturbed samples around it, queries the black-box model for its predictions on those samples, weights each sample by its proximity to the original point, and fits a simple interpretable model, such as a sparse linear model, to the weighted dataset. The coefficients of this local surrogate then serve as the explanation.
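To make this procedure concrete, here is a minimal from-scratch sketch of LIME for a tabular instance. The names `predict_fn` and `lime_explain` are illustrative assumptions, as are the choice of zeroing a feature as the "off" baseline and the exponential proximity kernel; the open-source `lime` package implements a more complete version of the same idea.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75):
    """Minimal LIME sketch for one tabular instance x (1-D array).

    predict_fn: black-box function mapping an (n, d) array to the
    probability of the class being explained, shape (n,).
    Returns the per-feature weights of the local linear surrogate.
    """
    d = x.shape[0]
    # 1. Perturb: randomly keep each feature "on" (original value)
    #    or turn it "off" (zeroed out; a simplifying assumption).
    mask = np.random.randint(0, 2, size=(num_samples, d))
    perturbed = mask * x  # rows of x with some features zeroed

    # 2. Query the black-box model on the perturbed samples.
    preds = predict_fn(perturbed)

    # 3. Weight each sample by proximity to x, using an exponential
    #    kernel on the fraction of features that were turned off.
    distances = 1.0 - mask.mean(axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # 4. Fit an interpretable (linear) surrogate on the binary masks;
    #    its coefficients are the feature contributions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(mask, preds, sample_weight=weights)
    return surrogate.coef_

# Example usage (hypothetical classifier clf and instance x_row):
# weights = lime_explain(lambda z: clf.predict_proba(z)[:, 1], x_row)
```

Fitting the surrogate on the binary on/off masks rather than the raw feature values keeps the explanation interpretable: each coefficient answers "how much does including this feature move the prediction?"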