Use Gradient Descent to Update Weights
Learn how to design the function that we will pass to the gradient descent algorithm to update the weights in our network.
The calculus behind error minimization
To do gradient descent, we need to work out the slope of the error function with respect to the weights. This requires calculus. Calculus is simply a mathematically precise way of working out how something changes when something else does. For example, we could calculate how the length of a spring changes as the force used to stretch it changes. Here, we’re interested in how the error function depends on the link weights inside a neural network. Another way of asking this is, “How sensitive is the error to changes in the link weights?”
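To make this concrete, here is a minimal sketch of the idea in Python. It assumes a hypothetical one-input, one-output "network" with a single weight and a squared error, purely for illustration; it nudges the weight slightly and measures how the error responds, which is a numerical approximation of the slope dE/dw that calculus gives us exactly.

```python
# A minimal sketch (illustrative assumptions, not the course's actual network):
# estimate the slope of the error with respect to one link weight by nudging
# the weight a tiny amount and watching how the error changes.

def error(weight, x, target):
    output = weight * x              # a single link: output = weight * input
    return (target - output) ** 2    # squared error for this toy example

x, target = 1.5, 2.0
w = 0.5
dw = 1e-6                            # a small nudge to the weight

# Numerical slope: change in error divided by change in weight,
# approximating dE/dw at the current weight value.
slope = (error(w + dw, x, target) - error(w, x, target)) / dw
print(f"approximate dE/dw at w={w}: {slope:.4f}")

# One gradient descent step: move the weight against the slope.
learning_rate = 0.1
w = w - learning_rate * slope
print(f"updated weight: {w:.4f}")
```

Running this prints a slope close to the exact calculus answer (here, dE/dw = -2(target - wx)x = -3.75), and the update step moves the weight in the direction that reduces the error. Calculus lets us compute that slope directly instead of nudging and measuring.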
Let’s start with a picture, because that helps us visualize what we are trying to achieve.