Unraveling the Basics of Neural Networks
Explore the basics of multi-layer perceptrons (MLPs), their structure as networks of neurons, and key concepts such as data preparation, dropout regularization, activation functions, and custom metrics. Understand how to construct and evaluate MLPs effectively using best practices and practical guidelines for deep learning.
The perceptron was developed in the 1950s and proved to be a powerful classifier for its time. A few decades later, researchers realized that stacking multiple perceptrons could be even more powerful. That turned out to be true, and the multi-layer perceptron (MLP) was born.
A single perceptron works much like a neuron in the human brain: it takes multiple inputs and, just as a neuron emits an electrical pulse, emits a binary output as its response.
This “neuron-like” behavior of a perceptron, and the fact that an MLP is a “network” of perceptrons, likely gave rise to the term neural networks in the field's early days.
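The perceptron described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original lesson; the weights and bias below are hypothetical values chosen so the perceptron behaves like a logical AND gate.

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """A single perceptron: weighted sum of inputs, then a binary step."""
    # The "pulse" fires (output 1) only if the weighted sum exceeds zero.
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

# Hypothetical weights and bias that make this perceptron act as an AND gate
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # → 1 (fires)
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # → 0 (does not fire)
```

An MLP stacks many such units in layers, with each layer's outputs feeding the next layer's inputs, which is what gives the network its extra expressive power.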
Since their creation, neural networks have come a long way. Tremendous advancements have been made in ...