The Error Slope between the Input and Hidden Layers
Learn how to find the error slope for the weights between the input and hidden layers.
The error slope between the input and hidden layers
That expression we just looked at is for refining the weights between the hidden and output layers. Now we need to finish the job and find a similar error slope for the weights between the input and hidden layers.
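For reference, that earlier expression takes the following form (restated here as a reminder; the notation is assumed to match the previous lesson, with $t_k$ the target value, $o_k$ the output of output node $k$, $o_j$ the output of hidden node $j$, and $w_{jk}$ the weight linking hidden node $j$ to output node $k$):

$$\frac{\partial E}{\partial w_{jk}} = -(t_k - o_k) \cdot \operatorname{sigmoid}\!\left(\sum_j w_{jk} \cdot o_j\right) \cdot \left(1 - \operatorname{sigmoid}\!\left(\sum_j w_{jk} \cdot o_j\right)\right) \cdot o_j$$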
We could do loads of algebra again, but we don’t have to. We can simply reuse the physical interpretation we built up earlier and rebuild the expression for the new set of weights we’re interested in:
- The first part, which was the error, now becomes the recombined backpropagated error out of the hidden nodes, just as we saw before. Let’s call that $e_j$.
- The sigmoid parts can stay the same, but the sum expressions inside refer to the preceding layer, so the sum is over all the inputs moderated by the weights into a hidden node $j$. We can call this $i_j$.
- The last part is now the output of the first layer of nodes $o_i$, which happen to be the input signals.
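Written out as substitutions into the earlier expression (a compact restatement of the list above, using the same assumed notation), the swap is:

$$(t_k - o_k) \;\rightarrow\; e_j, \qquad \sum_j w_{jk} \cdot o_j \;\rightarrow\; \sum_i w_{ij} \cdot o_i, \qquad o_j \;\rightarrow\; o_i$$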
This nifty way of avoiding work is simply taking advantage of the symmetry in the problem to construct a new expression. We say it’s simple, but it is a very powerful technique, wielded by some of the smartest mathematicians and scientists.
So, the second part of the final answer we’ve been striving towards, the slope of the error function for the weights between the input and hidden layers, is:

$$\frac{\partial E}{\partial w_{ij}} = -e_j \cdot \operatorname{sigmoid}\!\left(\sum_i w_{ij} \cdot o_i\right) \cdot \left(1 - \operatorname{sigmoid}\!\left(\sum_i w_{ij} \cdot o_i\right)\right) \cdot o_i$$
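To make the expression concrete, here is a minimal numerical sketch in Python. It is not part of the course code; the layer sizes and array names (`weights_ih`, `hidden_errors`) are illustrative assumptions, and it simply evaluates the slope above for one training example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shapes (assumed): 3 input nodes, 2 hidden nodes.
inputs = np.array([[0.9], [0.1], [0.8]])        # o_i, the input signals (column vector)
weights_ih = np.random.rand(2, 3) - 0.5         # w_ij, input -> hidden weights
hidden_errors = np.array([[0.4], [0.5]])        # e_j, backpropagated errors at the hidden nodes

# sigmoid of the weighted sum into each hidden node: sigmoid(sum_i w_ij * o_i)
hidden_outputs = sigmoid(np.dot(weights_ih, inputs))

# dE/dw_ij = -e_j * sigmoid(...) * (1 - sigmoid(...)) * o_i
error_slope = -np.dot(hidden_errors * hidden_outputs * (1.0 - hidden_outputs),
                      inputs.T)

print(error_slope.shape)  # (2, 3): one slope value per weight w_ij
```

Because the slope has one value per weight, the result is a matrix the same shape as `weights_ih`; a gradient-descent update would step each weight against its slope.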