X-GradCAM

Learn about an enhanced version of GradCAM that satisfies two explainability axioms—sensitivity and conservation.

Axiom-based GradCAM

Axiom-based GradCAM or X-GradCAM is an enhanced version of GradCAM that satisfies two explainability axioms—sensitivity and conservation.

Let's assume that $\mathcal{F}_l \in \mathbb{R}^{h \times w}$ is the $l^{th}$ feature map obtained from the penultimate convolutional layer ($h$ and $w$ are the height and width of the feature map) and $\mathcal{F}_l(i,j)$ represents the activation of the $(i,j)^{th}$ neuron in the $l^{th}$ feature map (there are $L$ feature maps in total in $\mathcal{F}$).
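As a concrete illustration of this notation (the sizes below are hypothetical), the $L$ feature maps can be stored as a single array of shape $(L, h, w)$, so that indexing recovers individual activations $\mathcal{F}_l(i,j)$:

```python
import numpy as np

# Hypothetical sizes: L feature maps of height h and width w,
# as produced by the penultimate convolutional layer.
L, h, w = 64, 7, 7
F = np.random.rand(L, h, w)  # F[l] is the l-th feature map

# Activation F_l(i, j) of the (i, j)-th neuron in the l-th feature map,
# here with l = 3 and (i, j) = (2, 5).
activation = F[3, 2, 5]
```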

The X-GradCAM algorithm optimizes for linear-combination weights $\alpha_l$ such that the resulting CAM $C^{\ \text{x-grad}}(i,j) = \sum_{l} \alpha_l \cdot \mathcal{F}_l(i,j)$ satisfies both axioms, as follows.
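The combination step above can be sketched in NumPy. This is a minimal illustration, not the full optimization: it assumes the feature maps and the gradients of the class score with respect to them are already available, and it uses activation-normalized gradients as the weights $\alpha_l$, which is the closed-form weighting X-GradCAM is known for:

```python
import numpy as np

def xgradcam_map(feature_maps, gradients):
    """Combine feature maps into a CAM with X-GradCAM-style weights.

    feature_maps, gradients: arrays of shape (L, h, w) holding the
    activations F_l(i, j) and the gradients of the class score with
    respect to them (hypothetical inputs for illustration).
    """
    eps = 1e-8  # guard against division by zero
    # alpha_l = sum_{i,j} [F_l(i,j) / sum_{i,j} F_l(i,j)] * gradient,
    # i.e. gradients weighted by the normalized activations.
    norm = feature_maps.sum(axis=(1, 2), keepdims=True) + eps
    alphas = (feature_maps / norm * gradients).sum(axis=(1, 2))       # (L,)
    # C(i, j) = sum_l alpha_l * F_l(i, j), followed by a ReLU as in GradCAM.
    cam = np.maximum(np.tensordot(alphas, feature_maps, axes=1), 0.0)  # (h, w)
    return cam
```

In a real pipeline, `feature_maps` and `gradients` would be captured with forward and backward hooks on the chosen convolutional layer.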

Sensitivity is the property that each feature's contribution to the explanation equals the change in the network's prediction caused by removing that feature from the input. In other words, given the network prediction $f^{k^*}(X)$,
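As a hedged sketch of this requirement (the exact notation here is an assumption, not taken from the source), sensitivity for the $l^{th}$ feature map can be written as:

```latex
\sum_{i,j} \alpha_l \, \mathcal{F}_l(i,j) \;=\; f^{k^*}(X) - f^{k^*}(X \backslash \mathcal{F}_l),
```

where $X \backslash \mathcal{F}_l$ denotes the input with the $l^{th}$ feature map removed (e.g., zeroed out), and the condition is required to hold for every $l$.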
