Layer-wise Relevance Propagation

Learn about Layer-wise Relevance Propagation, which explains predictions by assigning relevance scores to input features.

Mathematical model for LRP

Layer-wise Relevance Propagation (LRP) explains the output of a neural network by attributing relevance scores to each input feature. Like saliency maps, these scores indicate how much each input feature contributes to the network output.

LRP propagates the prediction score backward through the neural network using a set of purposely designed propagation rules. These rules backpropagate the relevance of each neuron from the target layer back to the input layer while following a conservation property.
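As an illustration of such a rule (the lesson text above does not spell one out, so take the exact form here as an example rather than the definitive choice), the basic LRP-0 rule redistributes the relevance $R^{(l+1)}_k$ of a neuron $k$ to the neurons $j$ of the previous layer in proportion to their contributions $z_{jk} = a_j w_{jk}$:

$$R^{(l)}_j = \sum_k \frac{z_{jk}}{\sum_{j'} z_{j'k}} \, R^{(l+1)}_k$$

Because each $R^{(l+1)}_k$ is split into fractions that sum to one, the total relevance in layer $l$ equals the total relevance in layer $l+1$, which is exactly the conservation property noted below.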

Note: The conservation property ensures that the sum of relevances of all neurons in a particular layer equals the prediction score of the network.

In other words, given the relevance score $R^{(l)}_i$ of a neuron $i$ in layer $l$ (with $L+1$ layers in total, including the input layer), LRP ensures:

$$\sum_i R^{(0)}_i = \dots = \sum_i R^{(l)}_i = \dots = \sum_i R^{(L)}_i = f(x)$$

where $f(x)$ denotes the prediction score of the network for the input $x$.
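The following minimal NumPy sketch shows what this looks like in practice for a single fully connected layer. The layer sizes, weights, activations, and the use of the LRP-0 rule are illustrative assumptions, not details taken from the lesson; the point is only to verify that the conservation property approximately holds when relevance is propagated one layer back.

```python
import numpy as np

# Minimal sketch of LRP through one fully connected layer using the basic
# LRP-0 rule. Layer sizes, weights, and activations are toy placeholders.

rng = np.random.default_rng(0)

a = rng.random(4)                  # activations a_j of layer l (4 neurons)
W = rng.standard_normal((4, 3))    # weights w_jk from layer l to layer l+1 (3 neurons)
z = a @ W                          # pre-activations z_k of layer l+1

# Treat the (rectified) outputs of layer l+1 as its relevance scores.
R_upper = np.maximum(z, 0.0)

# LRP-0 rule: R_j = sum_k (z_jk / sum_j' z_j'k) * R_k, with z_jk = a_j * w_jk.
contrib = a[:, None] * W                                  # per-connection contributions z_jk
denom = contrib.sum(axis=0)                               # sum_j' z_j'k (equals z_k)
denom = denom + 1e-9 * np.where(denom >= 0, 1.0, -1.0)    # avoid division by zero
R_lower = (contrib / denom * R_upper).sum(axis=1)

# Conservation property: the total relevance is (approximately) preserved.
print("Total relevance in layer l+1:", R_upper.sum())
print("Total relevance in layer l  :", R_lower.sum())
```

Running the sketch prints two nearly identical totals: the relevance assigned to layer $l$ sums to the same value as the relevance of layer $l+1$, up to the small stabilizer added to the denominator.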
