...


Error Backpropagation from More Output Nodes

Learn about error backpropagation from the output node into the network.

Backpropagation of errors

This method is used extensively when more than one node contributes to the output. The following diagram shows a simple network with two input nodes, but this time with two output nodes.

Output errors and weights in neural network

Both output nodes can have an error. In fact, that’s extremely likely when we haven’t trained the network. We can see that both of these errors need to inform the refinement of the internal link weights in the network. We can use the same approach as before, where we split an output node’s error among the contributing links in a way that’s proportionate to their weights.

The fact that we have more than one output node doesn’t really change anything. We simply repeat what we already did for the first output node for the second one. It is a simple process because the links into an output node don’t depend on the links into another output node. There is no dependence between these two sets of links.
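Here is a minimal Python sketch of that splitting rule, just to make it concrete. It assumes the diagram's labeling, where $w_{ij}$ is the weight on the link from input node $i$ to output node $j$, and all numeric values are purely illustrative, not taken from the lesson.

```python
# Illustrative link weights (w_ij = from input node i to output node j)
w11, w21 = 2.0, 1.0   # links feeding output node 1
w12, w22 = 3.0, 4.0   # links feeding output node 2

# Illustrative errors at the two output nodes, each being target minus actual
e1 = 0.5
e2 = 0.8

# Split e1 among its incoming links in proportion to their weights
e1_via_w11 = e1 * w11 / (w11 + w21)
e1_via_w21 = e1 * w21 / (w11 + w21)

# Splitting e2 is an independent, identical step: the links into output
# node 2 don't depend on the links into output node 1
e2_via_w12 = e2 * w12 / (w12 + w22)
e2_via_w22 = e2 * w22 / (w12 + w22)

print(e1_via_w11, e1_via_w21)   # 0.333..., 0.166...
print(e2_via_w12, e2_via_w22)   # 0.342..., 0.457...
```

Notice that each split is computed only from the weights feeding that particular output node, which is exactly why adding a second output node doesn't complicate the process.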

Error splitting

Looking at that diagram again, we have labeled the error at the first output node as $e_1$. Remember, this is the difference between the desired output provided by the training data, $t_1$, and the actual output, $o_1$. That is, $e_1 = t_1 - o_1$. The error at the second output node is labeled $e_2$.
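As a quick worked example (the numbers are made up for illustration and are not from the diagram): if the training data says the first output should be $t_1 = 0.9$ but the network actually produces $o_1 = 0.4$, then $e_1 = t_1 - o_1 = 0.9 - 0.4 = 0.5$, and $e_2$ is computed in exactly the same way from $t_2$ and $o_2$.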

We can see from the diagram that the error $e_1$ is split proportionally to the connected links with the weights $w_{11}$ ...