Error Backpropagation from More Output Nodes
Learn how errors are backpropagated from the output nodes into the network.
Backpropagation of errors
This method is used extensively when more than one node contributes to the output. The following diagram shows a simple network with two input nodes, but now with two output nodes.
Both output nodes can have an error. In fact, that's extremely likely when we haven't trained the network. Both of these errors need to inform the refinement of the internal link weights in the network. We can use the same approach as before: split each output node's error among the contributing links in proportion to their weights.
The fact that we have more than one output node doesn't really change anything. We simply repeat for the second output node what we already did for the first. The process stays simple because the links into one output node don't depend on the links into another output node; there is no dependence between the two sets of links. A small sketch of this idea follows below.
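To make this concrete, here is a minimal Python sketch of splitting each output node's error across its incoming links in proportion to the link weights. The weight values, targets, and variable names are hypothetical illustrations, not taken from the diagram:

```python
import numpy as np

# Weights of the links feeding each output node (hypothetical values).
# Row i holds the weights of the links arriving at output node i.
w = np.array([[3.0, 1.0],    # links into output node 1
              [2.0, 4.0]])   # links into output node 2

# Errors at the two output nodes: e_i = target_i - actual_i
targets = np.array([0.9, 0.3])
actuals = np.array([0.6, 0.5])
errors = targets - actuals            # e_1 = 0.3, e_2 = -0.2

# Split each output node's error across its incoming links,
# proportionally to the link weights. Each output node is handled
# independently of the other, just as described above.
for i, e in enumerate(errors):
    fractions = w[i] / w[i].sum()     # proportion carried by each link
    print(f"output node {i + 1}: error {e:+.2f} split as {e * fractions}")
```

Notice that the loop body never looks at the other output node's weights or error; that independence is why handling two output nodes is just the single-node procedure applied twice.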
Error splitting
Looking at that diagram again, we have labeled the error at the first output node as $e_1$. Remember, this is the difference between the desired output $t_1$ provided by the training data and the actual output $o_1$. That is, $e_1 = t_1 - o_1$. The error at the second output node is labeled $e_2$.
We can see from the diagram that the error $e_1$ is split in proportion to the weights of the links that feed the first output node, and likewise $e_2$ is split in proportion to the weights of the links that feed the second output node.
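As an illustrative sketch (the weight labels and numbers here are hypothetical, not read off the diagram), suppose the first output node receives two links with weights $w_{1,1} = 3$ and $w_{2,1} = 1$. Its error $e_1$ would then be split as:

$$
e_1 \cdot \frac{w_{1,1}}{w_{1,1} + w_{2,1}} = \frac{3}{4}\, e_1,
\qquad
e_1 \cdot \frac{w_{2,1}}{w_{1,1} + w_{2,1}} = \frac{1}{4}\, e_1
$$

The same fraction-of-the-sum pattern applies to $e_2$ and whichever links feed the second output node.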