Neural Networks

Understand the neural network, how it is formulated, and why it works.

What is a neuron?

In the world of Neural Networks (NN), the basic building block is the neuron. NNs are nothing more than a collection of neurons organized in layers, with information passing from one layer to the next. So to understand NNs, we first need to understand the neuron: the basic computing unit.

Mathematically, we have:

y = w_1*x_1 + w_2*x_2 + w_3*x_3

A neuron is simply a linear classifier with a single output.

import torch.nn as nn

# a single neuron: 3 inputs, 1 output, no bias term yet
neuron = nn.Linear(3, 1, bias=False)

Look familiar? In most applications, we also add a bias b to shift the position of the boundary line that separates the data points. This is the infamous Perceptron.
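As a sketch, the same layer with the bias term included (in PyTorch, `bias=True` is the default for `nn.Linear`):

```python
import torch
import torch.nn as nn

# bias=True is the default, adding the shift term b
perceptron = nn.Linear(3, 1)  # computes w*x + b

x = torch.randn(3)   # a single 3-dimensional input
y = perceptron(x)
print(y.shape)       # torch.Size([1])
```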

To extend this idea, we also pass this weighted average through a non-linear function σ that will give us the decision boundary.

Why?

Because with non-linear functions between linear layers, we can model much more complex representations with fewer linear layers.

Non-linearities are a key component that makes NNs very rich function approximators.
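To see why the non-linearity matters, here is a small sketch: without σ, two stacked linear layers collapse into a single linear map, so depth alone buys nothing.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two linear layers stacked WITHOUT a non-linearity in between...
stacked = nn.Sequential(
    nn.Linear(3, 4, bias=False),
    nn.Linear(4, 1, bias=False),
)

# ...are equivalent to ONE linear layer whose weight matrix
# is the product of the two weight matrices.
combined = stacked[1].weight @ stacked[0].weight  # shape (1, 3)

x = torch.randn(3)
print(torch.allclose(stacked(x), x @ combined.T))  # True
```

Inserting σ between the layers breaks this collapse, which is exactly what lets deep networks represent richer functions.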

Putting it all together, we have:

y = σ(w_1*x_1 + w_2*x_2 + w_3*x_3 + b) = σ(w*x + b)
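The full neuron can be sketched in PyTorch as a linear layer followed by a non-linearity. Sigmoid is used here as one common choice for σ (an assumption; ReLU, tanh, and others are equally valid):

```python
import torch
import torch.nn as nn

# A complete neuron: y = σ(w*x + b)
neuron = nn.Sequential(
    nn.Linear(3, 1),  # w*x + b
    nn.Sigmoid(),     # σ squashes the output into (0, 1)
)

x = torch.randn(3)
y = neuron(x)
print(y.item())  # a value strictly between 0 and 1
```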
