Understanding Multi-Layer Perceptrons
Unveil the structure and operational flow of MLPs, from input processing to backpropagation in deep learning.
Multi-layer perceptrons are possibly among the most frequently illustrated neural networks, yet many of those illustrations omit a few fundamental explanations. Since MLPs are the foundation of deep learning, this section provides a clearer perspective.
MLP architecture
A typical visual representation of an MLP is shown in the illustration above where:
- The nodes on the left represent the inputs.
- The middle nodes represent the hidden layers.
- The layer on the right is the output.
This high-level representation shows the feed-forward nature of the network. In a feed-forward network, information flows between layers in only the forward direction. Features learned at a layer are never fed back to any prior layer.
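This forward-only flow can be sketched in a few lines. The snippet below is a minimal, illustrative example using NumPy rather than TensorFlow; the 3-4-2 layer sizes, the random weights, and the ReLU activation are assumptions chosen for demonstration, not taken from the lesson.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Elementwise ReLU activation for the hidden layer
    return np.maximum(0.0, z)

def forward(x, params):
    """Feed-forward pass: input -> hidden -> output, never backward."""
    W1, b1, W2, b2 = params
    h = relu(x @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2      # output-layer values (logits)

# Illustrative 3-4-2 architecture: 3 inputs, 4 hidden units, 2 outputs
params = (
    rng.standard_normal((3, 4)), np.zeros(4),  # input -> hidden
    rng.standard_normal((4, 2)), np.zeros(2),  # hidden -> output
)
x = np.array([0.5, -1.0, 2.0])
y = forward(x, params)
print(y.shape)  # (2,)
```

Note that each layer consumes only the output of the layer before it, which is exactly the constraint the illustration conveys.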
The abstracted network shown in the illustration above is unwrapped to its elements in the illustration below.
MLP workflow
The journey through an MLP involves a series of carefully orchestrated steps, each contributing to the network's ability to learn and make predictions. Each element, its interactions, and its implementation in TensorFlow are explained step by step as follows:
- Data introduction: The process starts with a dataset. Suppose a dataset is shown at the top left,