Backpropagation
Learn how to update the weights of a neural network using backpropagation.
What is backpropagation?
Once you have computed the feedforward pass for the given data, you train the network using a method called backpropagation, which allows the neural network to learn from its mistakes. Backpropagation computes the gradient of the error with respect to every weight in the network, and gradient descent then uses these gradients to update all of the weights.
We have learned that, for a given set of input data, the output of a neural network is calculated using the weights of the edges that connect the nodes in the network. Therefore, we need to learn the optimal values of these weights: the values that minimize the final error on the training examples.
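For concreteness, in a network with one hidden layer this dependence can be written out directly (the activation function $f$ here is a generic stand-in, not one prescribed by this lesson):

$$\hat{y} = f\!\left(W^{(2)}\, f\!\left(W^{(1)} x\right)\right)$$

Every weight in $W^{(1)}$ and $W^{(2)}$ influences the output $\hat{y}$, and therefore the error, which is why all of them must be adjusted during training.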
In short, backpropagation consists of the following steps:
- Assign random values to all weights in the network.
- For every input sample, perform a feedforward operation to calculate the final output and the prediction error.
- Propagate this prediction error backwards to the previous layers, where it is used to adjust the weights.
- Once all the layers have finished adjusting their weights, use the new weights to calculate a new prediction error.

This process of adjusting the weights based on the final output is repeated until the error falls below the desired threshold.
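To make these steps concrete, here is a minimal NumPy sketch of the full training loop for the same architecture as the illustration below: one hidden layer with two hidden units and one output unit. The toy dataset, learning rate `eta`, stopping `threshold`, and sigmoid activation are all illustrative assumptions rather than choices made by this lesson:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset (logical AND): two input features, one target per sample.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])

# Step 1: assign random values to all weights (biases start at zero).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)  # input -> 2 hidden units
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)  # hidden -> 1 output unit

eta, threshold = 0.5, 0.01  # learning rate and error threshold (assumed)

for epoch in range(50_000):
    # Step 2: feedforward to get the final output and the prediction error.
    h = sigmoid(X @ W1.T + b1)      # hidden activations, shape (4, 2)
    y_hat = sigmoid(h @ W2.T + b2)  # final output, shape (4, 1)
    error = y_hat - y
    if np.mean(error ** 2) < threshold:
        break  # the error is below the desired threshold, so stop

    # Step 3: propagate the prediction error backwards through each layer.
    delta_out = error * y_hat * (1 - y_hat)     # output-layer error signal
    delta_hid = (delta_out @ W2) * h * (1 - h)  # hidden-layer error signal

    # Step 4: adjust all weights with gradient descent, then repeat.
    W2 -= eta * delta_out.T @ h / len(X)
    b2 -= eta * delta_out.mean(axis=0)
    W1 -= eta * delta_hid.T @ X / len(X)
    b1 -= eta * delta_hid.mean(axis=0)
```

The `delta` terms are the chain-rule factors that carry the output error back through each layer's sigmoid; with them, every weight update only needs the activations of the layer directly below it.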
Visualize backpropagation
The following illustration will help you visualize backpropagation in a network with 1 hidden layer that has 2 hidden units and an output layer with 1 output unit:
Calculate the final expression for the derivative of the error w.r.t. the weights to find a generic formula for the weight update.
✏️ The final expression for the derivative of the error w.r.t. a weight, i.e., $\partial E / \partial w$, after backpropagation is:
...
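The expression itself is left to the illustration above. As a point of reference, under the common textbook assumptions of a squared-error loss $E = \frac{1}{2}(y - \hat{y})^2$ and a sigmoid output unit with input $z = \sum_j w_j h_j$, the chain rule gives, for a weight $w_j$ connecting hidden activation $h_j$ to the output:

$$\frac{\partial E}{\partial w_j}
= \frac{\partial E}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial w_j}
= (\hat{y} - y)\,\hat{y}\,(1 - \hat{y})\,h_j$$

The generic weight update is then $w_j \leftarrow w_j - \eta\, \frac{\partial E}{\partial w_j}$ for a learning rate $\eta$. The exact expression in the illustration may differ if it assumes a different loss or activation.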