Layer-wise Relevance Propagation
Learn about Layer-wise Relevance Propagation, which explains predictions by assigning relevance scores to input features.
Layer-wise Relevance Propagation (LRP) explains the output of a neural network by attributing relevance scores to each input feature. Like saliency maps, these scores indicate how much each input feature contributes to the network output.
LRP propagates the prediction score backward through the neural network using a set of purpose-built propagation rules. These rules redistribute the relevance of each neuron from the output layer back to the input layer while obeying a conservation property.
Note: The conservation property ensures that the sum of relevances of all neurons in a particular layer equals the prediction score of the network.
In other words, given the relevance scores $R_j^{(l)}$ of the neurons $j$ in layer $l$, the conservation property requires that $\sum_j R_j^{(l)} = f(x)$ holds for every layer $l$, where $f(x)$ is the prediction score of the network.
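To make the propagation rules concrete, consider the simplest of them, commonly called the LRP-0 rule (or LRP-$\epsilon$ when a small stabilizer $\epsilon$ is added to the denominator). It redistributes the relevance $R_j^{(l+1)}$ of each neuron $j$ in layer $l+1$ to the neurons $i$ of layer $l$ in proportion to their contributions $a_i w_{ij}$ to the pre-activation of $j$:

$$R_i^{(l)} = \sum_j \frac{a_i w_{ij}}{\epsilon + \sum_{i'} a_{i'} w_{i'j}} \, R_j^{(l+1)}$$

The sketch below implements this rule in NumPy for a small, randomly initialized two-layer ReLU network without biases (biases are omitted so that conservation holds exactly). The network shape, the weights, and the helper `lrp_step` are illustrative assumptions, not code from this course.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network with random weights and no biases.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 neurons)
W2 = rng.normal(size=(8, 1))   # hidden (8 neurons) -> output (1 score)

x = rng.normal(size=(4,))      # an arbitrary input example

# Forward pass, keeping the activations of every layer.
a0 = x
a1 = np.maximum(0.0, a0 @ W1)  # ReLU hidden layer
a2 = a1 @ W2                   # linear output: the prediction score f(x)

def lrp_step(a, W, R, eps=1e-9):
    """Redistribute relevance R from one layer back to the previous one
    using the LRP-0 rule; eps only stabilizes the division."""
    z = a @ W                                   # z_j = sum_i a_i * w_ij
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # keep the sign, avoid z == 0
    s = R / z                                   # relevance per unit of z_j
    c = s @ W.T                                 # c_i = sum_j w_ij * s_j
    return a * c                                # R_i = a_i * c_i

# Initialize relevance at the output with the prediction score itself,
# then propagate it back layer by layer.
R2 = a2
R1 = lrp_step(a1, W2, R2)      # relevance of the hidden neurons
R0 = lrp_step(a0, W1, R1)      # relevance of the input features

# Conservation check: the relevance in every layer sums to f(x).
print("f(x)           =", a2.sum())
print("sum R (hidden) =", R1.sum())
print("sum R (input)  =", R0.sum())
```

Running the script prints three nearly identical numbers, illustrating the conservation property: the prediction score is neither created nor lost as it is redistributed from the output back to the input features.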