Keras Layers API (Part 2)
Learn about the pooling, normalization, and dropout layers of a neural network, and use Keras to implement these layers.
Pooling layers
A pooling layer reduces the spatial size of its input, which cuts down the number of parameters and the amount of computation in subsequent layers. It replaces the input data with a summary statistic of the nearby points, making it a type of nonlinear downsampling. The two most common types are:
Max pooling: This selects the maximum value within a rectangular neighborhood.
Average pooling: This computes the average of the values within that neighborhood.
For example, over the 2x2 patch [[1, 3], [2, 4]], max pooling outputs 4, while average pooling outputs 2.5.
The following code implements both pooling types over a 2x2 neighborhood with a stride of two units, though we can use other neighborhood sizes and strides in pooling layers.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# The inputs are RGB images of shape 32x32x3 and the batch size is 8.
input_dimensions = (8, 32, 32, 3)
input_tensor = tf.random.normal(input_dimensions)

max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),
                                           strides=(2, 2),
                                           padding='valid')
average_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),
                                                   strides=(2, 2),
                                                   padding='valid')

print(f'\nThe shape of the MaxPooling2D layer output is {max_pool_2d(input_tensor).shape}.',
      f'\nThe shape of the AveragePooling2D layer output is {average_pool_2d(input_tensor).shape}.')
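With 'valid' padding, a 2x2 window and a stride of 2 halve each spatial dimension, so both layers report an output shape of (8, 16, 16, 3). As a quick sanity check, here is a minimal sketch (using the illustrative 2x2 single-channel patch from the bullet points above; the variable names are our own) that reproduces those values by hand:

import tensorflow as tf

# A single 2x2 single-channel "image": the patch [[1, 3], [2, 4]].
patch = tf.constant([[1.0, 3.0], [2.0, 4.0]])
patch = tf.reshape(patch, (1, 2, 2, 1))  # (batch, height, width, channels)

# The default strides equal pool_size, so the whole patch is one window.
max_out = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(patch)
avg_out = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(patch)

print(max_out.numpy().squeeze())  # 4.0
print(avg_out.numpy().squeeze())  # 2.5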
Normalization layers
Some activation functions, such as ReLU, can have an unbounded output. If we don't normalize this output, the learning algorithm finds it difficult to converge. Normalizing the layer inputs to a common scale makes the algorithm numerically more stable and helps it converge faster. Our input data consists of real values stored as floating-point numbers, so it's important to normalize the input data to keep it within a moderate floating-point range. We normalize data