Hidden Layer
An overview of the limitations of a single-layer perceptron model.
Chapter Goals:
- Add a hidden layer to the model
- Understand the purpose of non-linear activations
- Learn about the ReLU activation function
A. Why a single layer is limited
In the previous chapter, we saw that the single-layer perceptron was unable to classify points as being inside or outside a circle centered at the origin. This was because the model's output (the logits) was computed directly from the input features, with no intermediate layers.
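For reference, here is a minimal sketch of the kind of dataset in question. The radius, value range, and sample count are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical dataset: label each 2-D point by whether it falls
# inside a circle of radius 1 centered at the origin.
rng = np.random.default_rng(0)
points = rng.uniform(-2.0, 2.0, size=(1000, 2))  # (x, y) coordinates
labels = (points[:, 0]**2 + points[:, 1]**2 < 1.0).astype(np.float32)
```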
Why exactly is this a limitation? Consider how the connections work in a single-layer neural network. Each neuron in the output layer has a connection coming in from every neuron in the input layer, and each connection weight is just a real number.
The single-layer neural network architecture. The weight values are denoted w1, w2, and w3.
Based on the diagram, the logits can be calculated as a linear combination of the input layer and weights:
$$\text{logits} = w_1 x_1 + w_2 x_2 + w_3 x_3$$
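As a concrete illustration, here is a minimal NumPy sketch of this forward pass. The specific feature values and weights are hypothetical:

```python
import numpy as np

# Hypothetical input features x1, x2, x3 and weights w1, w2, w3.
x = np.array([0.5, -1.2, 2.0])   # input layer values
w = np.array([0.1, 0.4, -0.3])   # connection weights

# The logits are a weighted sum of the inputs: a linear combination.
logits = np.dot(w, x)
print(logits)  # 0.5*0.1 + (-1.2)*0.4 + 2.0*(-0.3) = -1.03
```

Because the logits are linear in the inputs, the decision boundary (where the logits equal 0) is a straight line or hyperplane, which can never trace out a circle.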