Unraveling the Basics of Neural Networks
Explore the progression from perceptrons to MLPs, their architecture, and the fundamentals of deep learning.
The perceptron was introduced in the 1950s and proved to be a powerful classifier for its time. Decades later, researchers hypothesized that stacking multiple perceptrons into layers could be even more powerful. That turned out to be true, and the multi-layer perceptron (MLP) was born.
A single perceptron works like a neuron in the human brain: it takes multiple inputs and, much as a neuron emits an electrical pulse, produces a binary output as its response.
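To make this concrete, here is a minimal sketch of a perceptron in NumPy. The weights and bias are hand-picked for illustration (nothing here is trained, and the `perceptron` function and AND-gate setup are our own choices, not part of the original lesson):

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """A single perceptron: a weighted sum of the inputs followed by a
    binary step activation (it 'fires' 1 if the sum crosses zero)."""
    return 1 if np.dot(weights, inputs) + bias > 0 else 0

# Hand-picked weights and bias (not learned): this perceptron
# computes a logical AND of two binary inputs.
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```

Running this prints `1` only for the input `(1, 1)`, showing how a single set of weights carves the input space with one linear decision boundary.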
The neuron-like behavior of a single perceptron, together with an MLP being a "network" of perceptrons, is likely what gave rise to the term neural networks in the early days.
Since their creation, neural networks have come a long way, with major advances across architectures such as convolutional neural networks, recurrent neural networks, and many more.
Despite all these advancements, MLPs are still actively used. The MLP is the "hello world" of deep learning: much like linear regression in classical machine learning, it remains in active use because it is simple, robust, and effective.
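As a taste of what "stacking perceptrons" looks like in practice, below is a minimal forward pass through a two-layer MLP in NumPy. This is a sketch under our own assumptions: the layer sizes (4 inputs, 8 hidden units, 2 outputs), the ReLU activation, and the `mlp_forward` function are illustrative choices, not details from the original lesson. Note that modern MLPs replace the perceptron's hard binary step with a differentiable activation so the network can be trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """ReLU activation: a differentiable stand-in for the step function."""
    return np.maximum(0.0, x)

def mlp_forward(x, params):
    """Forward pass through a two-layer MLP: each layer is a linear
    transform (weights @ inputs + bias) followed by a nonlinearity."""
    W1, b1, W2, b2 = params
    hidden = relu(x @ W1 + b1)   # hidden layer of "neurons"
    return hidden @ W2 + b2      # output layer (raw scores)

# Randomly initialized parameters for a 4 -> 8 -> 2 network.
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 2)), np.zeros(2))
print(mlp_forward(rng.normal(size=(1, 4)), params))
```

The hidden layer is what distinguishes an MLP from a single perceptron: composing linear transforms with nonlinearities lets the network represent decision boundaries that no single perceptron can.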