Change the Network Shape
Explore the effect of the number of hidden nodes on the accuracy of results.
One thing we haven’t yet tried, and perhaps we should have earlier, is to change the shape of the neural network. Let’s try changing the number of middle hidden layer nodes. We’ve had them set to 100 for far too long!
Before we jump in and run experiments with different numbers of hidden nodes, let’s think about what might happen. The hidden layer is where the learning happens. Remember, the input nodes simply bring in the input signals, and the output nodes simply push out the network’s answer. It’s the hidden layer (or layers) that has to turn the input into the answer. Actually, it’s the link weights before and after the hidden nodes that do the learning.
If we had too few hidden nodes, say three, we can imagine there wouldn’t be enough space to learn whatever a network learns, and to somehow turn all the inputs into the correct outputs. It would be like asking a car with five seats to carry ten people. We just can’t fit that much stuff inside. Computer scientists call this kind of limit a learning capacity. We can’t learn more than the learning capacity, but we can change the vehicle, or the network shape, to increase the capacity.
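To make learning capacity concrete, here’s a minimal sketch that counts the learnable link weights for different hidden layer sizes. It assumes the 784-input, 10-output MNIST network from earlier lessons; those sizes are carried over as assumptions, not stated in this section.

```python
# Minimal sketch: how learning capacity (measured here as the number of
# learnable link weights) scales with the hidden layer size. The 784
# inputs and 10 outputs are assumed from the MNIST network used earlier.

def weight_count(input_nodes, hidden_nodes, output_nodes):
    # Weights input->hidden plus weights hidden->output.
    return input_nodes * hidden_nodes + hidden_nodes * output_nodes

for h in (3, 100, 10000):
    print(f"{h:>6} hidden nodes -> {weight_count(784, h, 10):,} link weights")
```

With only 3 hidden nodes there are just 2,382 weights to hold everything the network knows; with 10,000 there are nearly 8 million, which is plenty of capacity but a lot of options for the training to sort out.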
What if we had 10,000 hidden nodes? Well, we wouldn’t be short of learning capacity, but we might find it harder to train the network because there would be too many options for where the learning should go. Maybe it would take 10,000 epochs to train such a network.
The effect of hidden nodes on performance
Let’s run some experiments and see what happens.
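As a sketch of how such an experiment loop might look, the code below rebuilds a tiny three-layer network in the same style as this course’s neuralNetwork class and trains it on synthetic stand-in data. The prototype patterns, layer sizes, and learning rate are all illustrative assumptions, since the MNIST CSV files aren’t bundled here; to reproduce the real experiment, swap in the neuralNetwork class and MNIST data from the earlier lessons.

```python
# Runnable sketch of the experiment loop. The network mirrors the style
# of this course's neuralNetwork class; the synthetic data, sizes, and
# learning rate are illustrative stand-ins, not the course's MNIST setup.
import numpy
import scipy.special

class neuralNetwork:
    def __init__(self, inodes, hnodes, onodes, lr):
        # Link weight matrices, initialised as in earlier lessons.
        self.wih = numpy.random.normal(0.0, pow(hnodes, -0.5), (hnodes, inodes))
        self.who = numpy.random.normal(0.0, pow(onodes, -0.5), (onodes, hnodes))
        self.lr = lr

    def train(self, inputs_list, targets_list):
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T
        hidden = scipy.special.expit(numpy.dot(self.wih, inputs))
        final = scipy.special.expit(numpy.dot(self.who, hidden))
        output_errors = targets - final
        hidden_errors = numpy.dot(self.who.T, output_errors)
        # Gradient-descent updates for both weight matrices.
        self.who += self.lr * numpy.dot(output_errors * final * (1.0 - final), hidden.T)
        self.wih += self.lr * numpy.dot(hidden_errors * hidden * (1.0 - hidden), inputs.T)

    def query(self, inputs_list):
        inputs = numpy.array(inputs_list, ndmin=2).T
        hidden = scipy.special.expit(numpy.dot(self.wih, inputs))
        return scipy.special.expit(numpy.dot(self.who, hidden))

# Synthetic stand-in task: classify 10 noisy prototype patterns.
rng = numpy.random.default_rng(seed=0)
prototypes = rng.random((10, 64))

def sample():
    label = rng.integers(10)
    return prototypes[label] + rng.normal(0.0, 0.1, 64), label

# The experiment: only the hidden layer size changes between runs.
for hidden_nodes in (3, 10, 50, 100):
    n = neuralNetwork(64, hidden_nodes, 10, 0.3)
    for _ in range(2000):
        x, label = sample()
        targets = numpy.zeros(10) + 0.01
        targets[int(label)] = 0.99
        n.train(x, targets)
    correct = 0
    for _ in range(200):
        x, label = sample()
        correct += int(numpy.argmax(n.query(x)) == label)
    print(hidden_nodes, "hidden nodes -> accuracy", correct / 200)
```

The only thing that changes between runs is hidden_nodes, so any difference in the final score reflects the network’s shape rather than its training procedure.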