Introduction to Convolutional Neural Networks (CNNs)

Learn how convolutional neural networks (CNNs) work.

Filters and convolutions

We mentioned earlier that an SVM is like an improved perceptron in that it takes regularization and some form of feature transformation into account. We should, therefore, ask ourselves why the MLP outperforms an SVM on the MNIST data, where the SVM achieves only around 94 percent accuracy with the sklearn implementation. The main difference is that here we used more layers with adjustable parameters, whereas the SVM can be viewed as a one-layer (smart) perceptron with an additional preprocessing step that transforms the inputs into a high-dimensional representation of the feature space defined by the kernel. The use of additional layers with learned parameters allows for the learning of hierarchical features. We will illustrate this point in more detail later, but for now, it is sufficient to say that deep learning enables the learning of hierarchical representations that are difficult to match with shallow (less sequential) operations. Increasing the number of layers has allowed successful applications of neural networks to more complex problems, and models with tens or even hundreds of layers are now not uncommon.
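To make the comparison concrete, here is a minimal sketch of how one might pit an sklearn SVM against an sklearn MLP on MNIST. The specific hyperparameters, the training subset size, and the use of `fetch_openml("mnist_784")` are illustrative assumptions rather than the lesson's exact setup, so the resulting accuracies will only roughly match the figure quoted above.

```python
# Sketch: compare a kernel SVM (one layer + fixed kernel feature map) with an
# MLP (several layers of learned parameters) on MNIST. Hyperparameters and the
# training subset size are illustrative choices, not the lesson's exact setup.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load MNIST: 70,000 handwritten digits, each a 28x28 image flattened to 784 features.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10_000, random_state=0
)

# Standardize pixel values; both models train more reliably on scaled inputs.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train on a subset to keep the SVM's kernel computation tractable in a quick demo.
n_train = 10_000
svm = SVC(kernel="rbf")  # "one-layer" classifier on top of the kernel feature map
mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=100, random_state=0)

for name, model in [("SVM", svm), ("MLP", mlp)]:
    model.fit(X_train[:n_train], y_train[:n_train])
    print(f"{name} test accuracy: {model.score(X_test, y_test):.3f}")
```

With a setup along these lines, the MLP's extra layers of learned parameters typically give it an edge over the kernel SVM, which is the gap the paragraph above attributes to hierarchical feature learning.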
