Guided Backpropagation

Learn to implement guided backpropagation-based saliency maps.

Deeper dive into vanilla gradient

Rectified linear units (ReLUs) are among the most widely used activation functions in deep neural networks. A ReLU unit sets all negative values to zero and lets positive values pass through unchanged. In other words,

$$\text{ReLU}(x) = \begin{cases} x, & \text{if}\ x > 0 \\ 0, & \text{otherwise} \end{cases}$$

Its derivative with respect to the input is

$$\frac{\partial\, \text{ReLU}(x)}{\partial x} = \begin{cases} 1, & \text{if}\ x > 0 \\ 0, & \text{otherwise} \end{cases}$$
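
As a quick sanity check of the two formulas above, the sketch below evaluates ReLU and its gradient with automatic differentiation. PyTorch is assumed here purely for illustration; the lesson has not fixed a framework at this point.

```python
# Minimal sketch (PyTorch assumed): verify that the gradient of ReLU is
# 1 where the input is positive and 0 everywhere else.
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0], requires_grad=True)
y = torch.relu(x)

# Backpropagate a scalar; d(sum)/dx_i reduces to dReLU(x_i)/dx_i.
y.sum().backward()

print("ReLU(x): ", y.detach())
print("gradient:", x.grad)  # -> tensor([0., 0., 0., 1., 1.])
```

Because the derivative is either 0 or 1, backpropagating through a ReLU simply passes the incoming gradient wherever the forward input was positive and blocks it everywhere else; this is the behavior that guided backpropagation later modifies.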