Perturbation-Based Explanations
Learn about a perturbation-based algorithm that determines the importance of pixels in a model's prediction.
What are perturbation-based explanations?
Perturbation-based explanations are a class of methods that explain the predictions of deep learning models by perturbing the input data and observing how the model's output changes. If perturbing some part of the input causes a large change in the prediction, that part is deemed important to the model's decision.
In simpler terms, perturbation-based explanations probe a model by changing small parts of its input and watching how its output responds. This can help us understand why the model makes certain predictions and which features it relies on to make them.
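To make this concrete, here's a minimal sketch of the perturb-and-observe idea (the linear "model" and the zeroing perturbation are illustrative stand-ins, not part of the original lesson): each input feature is zeroed in turn, and the resulting shift in the output serves as that feature's importance.

```python
import numpy as np

# Toy "model": a fixed linear scoring function standing in for a trained network.
weights = np.array([0.5, -2.0, 0.1, 3.0])

def model(x):
    return float(weights @ x)

x = np.array([1.0, 1.0, 1.0, 1.0])
baseline = model(x)

# Perturb each feature (set it to zero) and measure the change in output.
importance = np.zeros_like(x)
for i in range(len(x)):
    x_pert = x.copy()
    x_pert[i] = 0.0                      # the perturbation
    importance[i] = abs(baseline - model(x_pert))

print(importance)  # largest values -> most influential features
```

Note that this treats the model as a black box: we only need to query it, not inspect its internals or gradients.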
Randomized input sampling for explanation
Randomized Input Sampling for Explanation (RISE) is a popular perturbation-based algorithm that determines the importance of pixels in a model's prediction. Unlike gradient-based methods, which compute pixel importance from backpropagated gradients, RISE estimates importance empirically, treating the model as a black box.
RISE estimates the importance of an image pixel by setting its intensity to zero (masking it out) and measuring how much this affects the model's prediction. The rationale is that if a pixel or region is important, then removing it should significantly degrade the model's confidence in the predicted class.
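Before looking at RISE's randomized masking, it helps to see this rationale in its simplest form. The sketch below (illustrative only; the patch size and the dummy scoring function are assumptions) slides an occlusion window over the image, zeroes each patch in turn, and records the drop in score as that region's importance:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Zero out square patches and record how much the score drops."""
    H, W, _ = image.shape
    base = score_fn(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = 0.0   # remove the region
            # A large drop in score means the region was important.
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Dummy scorer standing in for a network's class confidence:
# here, the mean brightness of the top-left quadrant.
def score_fn(img):
    return float(img[:16, :16].mean())

image = np.random.rand(32, 32, 3)
print(occlusion_map(image, score_fn))   # high values cluster in the top-left
```

RISE generalizes this idea: instead of occluding one patch at a time, it samples many random masks and aggregates the results, as formalized next.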
Given a three-channel image $I: \Lambda \to \mathbb{R}^3$ defined on a pixel grid $\Lambda = \{1, \dots, H\} \times \{1, \dots, W\}$, let $f(I)$ denote the model's confidence score for the class of interest.

The mask $M: \Lambda \to \{0, 1\}$ is a random binary mask sampled from some distribution $\mathcal{D}$. Multiplying the image element-wise by the mask, $I \odot M$, sets the pixels where $M = 0$ to zero and leaves the rest unchanged. RISE defines the importance of a pixel $\lambda \in \Lambda$ as the expected score over all masks that keep $\lambda$ visible:

$$S_{I,f}(\lambda) = \mathbb{E}_M\left[f(I \odot M) \mid M(\lambda) = 1\right]$$

In other words, a pixel is important if the model tends to remain confident whenever that pixel is preserved by the mask. Since this expectation can't be computed exactly, it's approximated by Monte Carlo sampling with $N$ random masks $M_1, \dots, M_N$:

$$S_{I,f}(\lambda) \approx \frac{1}{\mathbb{E}[M(\lambda)] \cdot N} \sum_{i=1}^{N} f(I \odot M_i) \cdot M_i(\lambda)$$
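Below is a minimal sketch of this Monte Carlo estimator. The grid size, number of masks, keep-probability `p` (which plays the role of $\mathbb{E}[M(\lambda)]$), and the stand-in scoring function are all illustrative assumptions; the original paper additionally upsamples masks bilinearly with random shifts, which we replace here with simple nearest-neighbor upsampling.

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=500, grid=7, p=0.5, rng=None):
    """Monte Carlo estimate of RISE saliency: S ~ sum_i f(I.M_i) * M_i / (p*N)."""
    rng = rng or np.random.default_rng(0)
    H, W, _ = image.shape
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Sample a coarse binary grid, then upsample to image size by
        # nearest-neighbor so masked regions form contiguous blocks.
        coarse = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(coarse, np.ones((H // grid + 1, W // grid + 1)))[:H, :W]
        score = score_fn(image * mask[..., None])  # I (.) M, broadcast over channels
        saliency += score * mask                   # weight each mask by its score
    return saliency / (p * n_masks)                # divide by E[M(lambda)] * N

# Dummy scorer standing in for a network's class confidence.
def score_fn(img):
    return float(img[:16, :16].mean())

image = np.random.rand(32, 32, 3)
heatmap = rise_saliency(image, score_fn)
print(heatmap.shape)   # (32, 32) importance map
```

Because every query only needs the model's output score, this estimator works with any classifier, including ones whose gradients are unavailable.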