Backpropagation Algorithm
Take a look at the mathematics of the backpropagation algorithm.
Neural networks (NNs) are non-linear classifiers that can be formulated as a series of matrix multiplications, each followed by a non-linear activation. Just like linear classifiers, they can be trained using the same principles we followed before, namely the gradient descent algorithm. The difficulty arises in computing the gradients.
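As a minimal sketch of this idea in code (the layer shapes, the NumPy dependency, and the sigmoid activation are illustrative assumptions, not part of the lesson):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative shapes: 3 inputs, a 4-unit hidden layer, 1 output.
x = rng.normal(size=(3, 1))
W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

# Two layers = two matrix multiplications, each followed by a non-linearity.
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)   # the network's output
```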
But first things first.
Let’s start with a straightforward example of a two-layered NN, with each layer containing just one neuron.
Notations
- The superscript $L$ defines the layer that we are in.
- $a^{L}$ denotes the activation of layer $L$.
- $w^{L}$ is a scalar weight of layer $L$.
- $b^{L}$ is the bias term of layer $L$.
- $C$ is the cost function, $y$ is our target class, and $\sigma$ is the activation function.
Forward pass
Our lovely model would look something like this in a simple sketch:

[Figure: input $x$ → layer-1 neuron → layer-2 neuron → output $a^{2}$]
We can write the output of a neuron at layer $L$ as:

$$a^{L} = \sigma(w^{L} a^{L-1} + b^{L})$$

where $a^{L-1}$ is the activation of the previous layer (for the first layer, this is just the input $x$). To simplify things, let's define:

$$z^{L} = w^{L} a^{L-1} + b^{L}$$

so that our basic equation will become:

$$a^{L} = \sigma(z^{L})$$

We also know that our loss function is:

$$C = (a^{L} - y)^2$$
This is the so-called forward pass. We take some input $x$ and pass it through the network. From the output of the network, we can compute the loss $C$.
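A minimal sketch of this forward pass in code, assuming a sigmoid for $\sigma$ and arbitrary illustrative parameter values:

```python
import numpy as np

def sigma(z):
    # Concrete choice of activation (the lesson keeps sigma generic):
    # the sigmoid, assumed here for illustration.
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative scalar parameters for the two-layer, one-neuron-per-layer network.
w1, b1 = 0.5, 0.1
w2, b2 = -0.3, 0.2

def forward(x, y):
    z1 = w1 * x + b1     # z^1 = w^1 a^0 + b^1, with a^0 = x
    a1 = sigma(z1)       # a^1 = sigma(z^1)
    z2 = w2 * a1 + b2    # z^2 = w^2 a^1 + b^2
    a2 = sigma(z2)       # a^2 = sigma(z^2) is the network output
    C = (a2 - y) ** 2    # loss C = (a^2 - y)^2
    return z1, a1, z2, a2, C
```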
Backward pass
The backward pass is the process of adjusting the weights in all the layers to minimize the loss $C$.
To adjust the weights based on the training example, we can use our known update rule:

$$w^{L} \leftarrow w^{L} - \lambda \frac{\partial C}{\partial w^{L}}$$
where $\lambda$ is the learning rate. The hard part is computing the gradient $\frac{\partial C}{\partial w^{L}}$, and this is exactly what backpropagation does via the chain rule.
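As a minimal sketch under the same assumptions as above (reusing `sigma`, `forward`, and the parameters from the previous snippet), one gradient-descent step with chain-rule gradients might look like this; the exact derivation is what the lesson goes on to develop, so treat this as a standard illustration rather than the author's derivation:

```python
def sigma_prime(z):
    s = sigma(z)
    return s * (1.0 - s)    # derivative of the sigmoid

def train_step(x, y, lam=0.1):
    """One gradient-descent update; gradients come from the chain rule."""
    global w1, b1, w2, b2
    z1, a1, z2, a2, C = forward(x, y)

    # Layer 2: dC/dw^2 = dC/da^2 * da^2/dz^2 * dz^2/dw^2
    delta2 = 2.0 * (a2 - y) * sigma_prime(z2)
    dC_dw2, dC_db2 = delta2 * a1, delta2

    # Layer 1: the error signal flows back through w^2
    delta1 = delta2 * w2 * sigma_prime(z1)
    dC_dw1, dC_db1 = delta1 * x, delta1

    # Update rule: w <- w - lambda * dC/dw
    w2, b2 = w2 - lam * dC_dw2, b2 - lam * dC_db2
    w1, b1 = w1 - lam * dC_dw1, b1 - lam * dC_db1
    return C
```

Calling `train_step(0.7, 1.0)` repeatedly on this toy sample should drive the returned loss $C$ toward zero.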