...

Model—Artificial Neural Networks

Learn about the anatomy of an artificial neural network, its different layers, and how it learns non-separable data.

The perceptron revisited

Let’s look at the perceptron model, or a simple perceptron, again. A simple perceptron is composed of a single neuron that receives each data point as input, performs a computation on it, and outputs 1 if the result is above a threshold. We have already seen how it looks:

[Figure: Perceptron]
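The computation above can be sketched in a few lines of code. This is a minimal illustration, not a full implementation; the example weights and threshold are made-up values:

```python
# A minimal sketch of a simple perceptron: a weighted sum of the
# inputs followed by a threshold activation.
def perceptron(x, w, threshold):
    # Weighted sum of the inputs
    total = sum(xi * wi for xi, wi in zip(x, w))
    # Activate (output 1) only if the sum exceeds the threshold
    return 1 if total > threshold else 0

# Illustrative values: 0.6*1.0 + 0.4*0.5 = 0.8 > 0.5, so the neuron fires
print(perceptron([1.0, 0.5], [0.6, 0.4], 0.5))  # prints 1
```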

We can consider a perceptron to be made up of layers. The first layer represents the features of the data in the input space. The layer in which the output is calculated is called the output layer. In the above perceptron model, there is only one neuron in the output layer.

Since the output neuron is connected to the inputs directly, a simple perceptron is capable of learning a single decision boundary in the input space. In other words, it creates a linear partition of the input space.

[Figure: Data is separable using a single perceptron]
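For separable data like this, the classic perceptron learning rule finds such a linear boundary by nudging the weights toward misclassified points. A small sketch, using logical AND as a stand-in separable dataset (the learning rate and epoch count are illustrative choices):

```python
# Sketch: the perceptron learning rule on a linearly separable
# dataset (logical AND as a stand-in).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias (the threshold, folded in as -b)
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):              # a few passes over the data
    for x, target in data:
        error = target - predict(x)
        # Shift the boundary toward each misclassified point
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # prints [0, 0, 0, 1]
```

Because the data is linearly separable, the updates eventually stop and the learned line classifies every point correctly.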

What happens if our data is not so neatly separable? We saw an example in the previous lesson with the newly obtained dataset of movies. A single linear decision boundary is no longer sufficient to separate this data in the input space, as we can see in the figure below. This also means that our simple perceptron model will not be able to learn the ideal classification boundary for this slightly more complex movie dataset, because it only learns a linear partition of the input space.

[Figure: Data not separable using a single perceptron]
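We can see this failure in code. Using XOR as a stand-in for the non-separable movie data (an assumption for illustration), no choice of weights lets a single neuron classify all four points, so the learning rule keeps oscillating and accuracy stays stuck below 100%:

```python
# XOR: a classic dataset that no single line can separate.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The perceptron update rule never converges on this data
for _ in range(100):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

accuracy = sum(predict(x) == t for x, t in data) / len(data)
print(accuracy)  # never reaches 1.0; at best 3 of the 4 points are correct
```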

What does an ideal decision boundary look like for this new movie data?

[Figure: Two lines can create a non-linear boundary between classes]

Let’s look at this a bit more closely. Instead of a single line, we now have two lines that separate the movie data in the input space. A single perceptron (aka neuron) is capable of learning only one line in the input space. What if we use two neurons to learn these two lines? Sounds like a solution! After all, the human brain, from which ML takes inspiration, has more than one neuron (actually, many!). It is a whole network of neurons!

To learn a complex decision boundary in the input space, we need more than one neuron in the model.
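The idea can be sketched directly: give each of two neurons one line, then let a third neuron combine their outputs. The weights below are hand-picked for illustration (again using the XOR pattern as a stand-in for the movie data), not learned:

```python
# Two neurons, each encoding one line, combined into a non-linear boundary.
def neuron(x, w, threshold):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) > threshold else 0

def two_neuron_net(x):
    h1 = neuron(x, [1, 1], 0.5)   # line 1: fires above x1 + x2 = 0.5
    h2 = neuron(x, [1, 1], 1.5)   # line 2: fires above x1 + x2 = 1.5
    # Output neuron: fires only between the two lines (h1 on, h2 off)
    return neuron([h1, h2], [1, -1], 0.5)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, two_neuron_net(x))   # prints 0, 1, 1, 0: a non-linear partition
```

The region where the network outputs 1 is the strip between the two lines, which is exactly the kind of boundary a single neuron could never produce.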

Adding more neurons to the model

We now know that to classify a complex dataset, such as the one we have seen above, we need a more complex model with more than one neuron. Let’s add another neuron to our simple perceptron model. Where do we place it? We need an input layer with three inputs, ...