Update Weights

Learn how to reduce errors by updating the weights.

Error-controlled weight updates

We have not yet discussed the central question of updating the link weights in a neural network. We’ve been working toward this point, and we’re almost there. We have just one more key idea to cover before we unlock this secret.

So far, we’ve propagated the errors back to each layer of the network. Why did we do this? Because the error is used to guide how we adjust the link weights to improve the overall answer given by the neural network. This is basically what we were doing with the linear classifier at the start of this course.
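As a reminder of that earlier idea, here is a minimal sketch of an error-guided update for a one-parameter linear classifier y = A·x. The learning rate, starting slope, and training pair are illustrative values of mine, not taken from the course text:

```python
# Error-guided refinement of a one-parameter linear classifier y = A * x.
x, target = 3.0, 1.1        # one illustrative training example
A = 0.25                    # initial slope guess
learning_rate = 0.5         # moderates each correction

for _ in range(5):
    y = A * x                          # the classifier's current answer
    error = target - y                 # positive if the answer is too low
    A += learning_rate * (error / x)   # nudge the slope to shrink the error
    print(f"A = {A:.4f}, error = {error:.4f}")
```

Each pass shrinks the error, and the slope settles toward the value that makes A·x hit the target. The question for this section is how to do the same thing when the nodes are no longer this simple.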

But these nodes aren’t simple linear classifiers. These slightly more sophisticated nodes sum the weighted signals feeding into them and apply the sigmoid threshold function to that sum. So how do we actually update the weights on the links that connect these more sophisticated nodes? Why can’t we use some fancy algebra to work out directly what the weights should be?
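Concretely, a single such node can be sketched like this. The `sigmoid()` helper and the sample inputs and weights are illustrative, not values from the course:

```python
import math

def sigmoid(z):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def node_output(inputs, weights):
    # Sum the weighted signals arriving at the node, then apply the sigmoid.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

print(node_output([1.0, 0.5], [0.9, -0.2]))  # sigmoid(0.8) ≈ 0.69
```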

We can’t calculate the weights directly because the mathematics is too complex. There are simply too many combinations of weights, and too many functions of functions, being composed as the signal feeds forward through the network. Think about just a small neural ...
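To see why the algebra gets out of hand, here is a sketch of the forward pass through a tiny fully connected network. The 3-3-3 layer sizes and every weight value below are illustrative choices of mine, not figures from the text:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows):
    # Each node in the layer sums its weighted inputs, then applies the sigmoid.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_rows]

# A 3-3-3 network: even this toy has 3*3 + 3*3 = 18 weights.
inputs = [0.9, 0.1, 0.8]
w_input_hidden  = [[0.9, 0.3, 0.4], [0.2, 0.8, 0.2], [0.1, 0.5, 0.6]]
w_hidden_output = [[0.3, 0.7, 0.5], [0.6, 0.5, 0.2], [0.8, 0.1, 0.9]]

outputs = layer(layer(inputs, w_input_hidden), w_hidden_output)
print(outputs)
```

Each output is already a sigmoid of weighted sums of sigmoids of weighted sums. Solving those nested expressions algebraically for 18 unknown weights is hopeless, and real networks have vastly more.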
