Understanding CNNs: Fully Connected Layers

Learn how 2D inputs like images are connected to 1D, fully connected layers.

Fully connected layers

A fully connected layer connects every input unit to every output unit through a dense set of weights. Because each output depends on every input, these weights can learn global information. Placing such layers after the convolution layers lets the network globally combine the local features learned by convolution to produce meaningful outputs.

Let’s define the output of the last convolution or pooling layer to be of size $p \times o \times d$, where $p$ is the height of the input, $o$ is the width of the input, and $d$ is the depth of the input. As an example, think of an RGB image, which will have a fixed height, fixed width, and a depth of 3 (one depth channel for each of the R, G, and B components).
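As a concrete sketch (the sizes here are assumed for illustration), such an output can be represented as a 3D NumPy array of shape `(p, o, d)`:

```python
import numpy as np

# Assumed sizes: height p=4, width o=5, depth d=3 (e.g., an RGB-like feature map)
p, o, d = 4, 5, 3
conv_output = np.random.rand(p, o, d)  # output of the last conv/pooling layer
print(conv_output.shape)  # (4, 5, 3)
```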

Then, for the initial fully connected layer found immediately after the last convolution or pooling layer, the weight matrix will be $w^{(m,\, p\times o\times d)}$, where $p \times o \times d$ (height $\times$ width $\times$ depth) is the number of output units produced by that last layer, and $m$ is the number of hidden units in the fully connected layer. Then, during inference (or prediction), we reshape the output of the last convolution/pooling layer to a vector $x$ of size $(p \times o \times d, 1)$ and perform the following matrix multiplication to obtain $h$:

$h = w^{(m,\, p\times o\times d)} x$
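A minimal NumPy sketch of this step, with assumed sizes (the values of `p`, `o`, `d`, and `m` are illustrative), flattens the conv output and multiplies it by the weight matrix:

```python
import numpy as np

p, o, d = 4, 5, 3   # assumed height, width, depth of the last conv/pool output
m = 10              # assumed number of hidden units in the fully connected layer

conv_output = np.random.rand(p, o, d)   # output of the last conv/pooling layer
w = np.random.rand(m, p * o * d)        # weight matrix of shape (m, p*o*d)

x = conv_output.reshape(p * o * d, 1)   # reshape to a (p*o*d, 1) column vector
h = w @ x                               # matrix multiplication -> shape (m, 1)
print(h.shape)  # (10, 1)
```

Each of the `m` hidden units receives a weighted sum over all `p * o * d` inputs, which is exactly the global connectivity described above.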
