Strategic Pooling Approaches
Investigate the underlying reasons for max pooling's effectiveness and explore architectural adjustments to enhance pooling efficiency in convolutional networks.
In convolutional neural networks (CNNs), the interplay of convolution, activation, and pooling operations is pivotal for feature extraction and dimensionality reduction. While convolution and activation functions introduce complexity and nonlinearity, pooling is instrumental in distilling essential features, which enhances computational efficiency. Although various pooling methods exist, max pooling stands out for its effectiveness across numerous datasets.
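To make this structure concrete, the following is a minimal sketch of a single "convolution → activation → pooling" block. The framework (Keras), layer sizes, and input shape are illustrative assumptions rather than details taken from this lesson:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A single "convolution -> activation -> pooling" block.
# Input: a 28x28 single-channel image (illustrative assumption).
block = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, padding="same"),  # convolution: feature extraction
    layers.Activation("relu"),                         # activation: nonlinearity
    layers.MaxPooling2D(pool_size=2),                  # pooling: 28x28 -> 14x14
])

block.summary()  # shows the spatial dimensions halving after max pooling
```

Note how only the pooling layer changes the spatial resolution here; the convolution (with `padding="same"`) and the activation leave it untouched.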
Let’s explore a plausible reason behind max pooling’s superiority. This reasoning also uncovers an inherent fallacy of distribution distortion in the “convolution → activation → pooling” structure used in traditional networks.
Max-pool superiority
The recognition of max pooling’s importance traces back to the biological research of Riesenhuber and Poggio 1998. Riesenhuber and Poggio 1999 provided a biological explanation for max pooling’s superiority over average pooling. Popular works by deep learning researchers, such as Yang et al. 2009, Boureau, Ponce, and LeCun 2010, and Saeedan et al. 2018, have also advocated for max pooling.
Yang et al. 2009 reported significantly better classification performance with max pooling than with other pooling operations on several object classification benchmarks.
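To see intuitively why taking the maximum can retain information that averaging dilutes, consider the toy NumPy sketch below. The post-activation values are fabricated purely for illustration, mimicking the sparse, mostly-zero maps that ReLU typically produces:

```python
import numpy as np

# A toy 4x4 post-activation feature map: mostly zeros (as after ReLU),
# with a few strong responses indicating the feature was detected.
feature_map = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 5.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 4.0, 0.0],
])

def pool2x2(x, reduce_fn):
    """Apply a 2x2 pooling window with stride 2 using the given reduction."""
    h, w = x.shape
    return np.array([
        [reduce_fn(x[i:i + 2, j:j + 2]) for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ])

print(pool2x2(feature_map, np.max))   # [[5. 0.] [0. 4.]] -- strong responses preserved
print(pool2x2(feature_map, np.mean))  # [[1.25 0.] [0. 1.]] -- responses diluted by zeros
```

In this sketch, max pooling passes the strong detections through unchanged, whereas average pooling mixes them with the surrounding zeros and weakens the signal.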