Images and Filters
Distinguish between convolution kernels and filters.
Weights of a convolutional layer
In a fully connected layer, every connection between nodes has its own weight. Our goal is to find the optimal weights that let the model make correct predictions on the data it sees at inference time. We also learned that convolution is a particular multiplication between kernel elements and input data. What we have called kernel elements until now are the weights of the convolutional layer, and training a convolutional neural network means optimizing these weights so that the model learns what we want it to predict later.
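The idea above can be sketched in a few lines of NumPy: the kernel entries act as the layer's weights, and a convolution is just an elementwise product-sum at every window position. The function name `conv2d` and the random initialization are illustrative assumptions, not part of the lesson's code.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    take the elementwise product-sum at every position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# The kernel's entries are the layer's weights: randomly initialized
# here, and adjusted by training in a real CNN.
rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 3))
image = rng.standard_normal((5, 5))
print(conv2d(image, weights).shape)  # (3, 3)
```

A framework such as PyTorch or TensorFlow does the same sliding product-sum, only vectorized and with the weight updates handled by backpropagation.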
In the image processing field, by contrast, you might encounter specific kernels, such as edge-detection, sharpening, and blurring kernels, whose values are already determined. To detect the edges of an image, all we have to do is apply an edge-detection kernel. In a CNN, our kernels only become specific at the end of training, and then we are ready to take our trained convolutional layers and apply them to an image.
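To make the contrast concrete, here is a hand-designed kernel with fixed, predetermined values: a Laplacian-style edge detector. On a flat region it responds with zeros (no edge), and at a sharp boundary it responds strongly. The `conv2d` helper is a minimal sketch, not a library function.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (product-sum at every window position)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A classic hand-designed edge-detection (Laplacian) kernel:
# its values are fixed in advance, not learned.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])

# On a flat (constant) region the response is zero: no edge detected.
flat = np.full((4, 4), 7.0)
print(conv2d(flat, edge_kernel))  # all zeros

# At a sharp vertical boundary the response is nonzero.
step = np.hstack([np.zeros((4, 2)), np.ones((4, 2))])
print(conv2d(step, edge_kernel))
```

A CNN kernel starts from random values and may or may not end up resembling such a detector; its final values are whatever the training data drives them to.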
Filters
Knowing that window, kernel, and mask are all different names for what we have called the convolution window until now, it's time to introduce the term filter.
A digital image is a 2D matrix, so everything we have learned so far applies to images. The main difference is that an RGB image is not a simple 2D matrix but a stack of three 2D matrices, one per color channel.
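A short sketch of how this plays out in practice, under the common convention that a multi-channel filter carries one 2D kernel per input channel and the per-channel results are summed into a single feature map (the shapes and names here are illustrative assumptions):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution on a single channel."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

rng = np.random.default_rng(1)
rgb = rng.random((6, 6, 3))            # height x width x 3 channels
filt = rng.standard_normal((3, 3, 3))  # one 3x3 kernel per channel

# Convolve each channel with its own kernel, then sum the results
# into one feature map:
feature_map = sum(conv2d(rgb[:, :, c], filt[:, :, c]) for c in range(3))
print(feature_map.shape)  # (4, 4)
```

So the three channels of the input collapse into a single 2D output per filter; a layer with many filters produces one feature map per filter.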
So if we have a ...