Varieties of Networks: Convolution and Recursive
Learn about the history of improvements in neural networks and alternative deep models.
Until now, we’ve primarily discussed the basics of neural networks by referencing feedforward networks, where every input is connected to every output in each layer. While these feedforward networks are useful for illustrating how deep networks are trained, they are only one class of a broader set of architectures used in modern applications, including generative models.
Networks for seeing: Convolutional architectures
One of the inspirations for deep neural network models is the biological nervous system. As researchers attempted to design computer vision systems that would mimic the functioning of the visual system, they turned to the architecture of the visual cortex, as revealed by the physiological studies of neurobiologists David Hubel and Torsten Wiesel in the 1950s and 1960s.
Hubel and Wiesel studied the visual system in cats, showing how their perception of shapes is composed from the activity of individual cells arranged in columns. Each column of cells is tuned to detect a specific orientation of an edge in an input image; representations of complex shapes are then assembled from these simpler features.
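The idea of orientation-selective cells can be loosely illustrated in code: a small filter slid across an image responds strongly only to edges of one orientation. The sketch below (a hypothetical illustration, not from the original text) applies Sobel-style vertical and horizontal filters to a toy image containing a vertical edge; only the matching orientation produces a strong response.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode sliding-window filtering (what CNNs call 'convolution')."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Sobel-style filters, each tuned to one edge orientation.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])
horizontal_edge = vertical_edge.T

v_response = convolve2d(image, vertical_edge)
h_response = convolve2d(image, horizontal_edge)

print(np.abs(v_response).max())  # strong response: orientation matches
print(np.abs(h_response).max())  # zero response: orientation does not match
```

Each filter here plays the role of one "column" of cells: it fires only for its preferred edge orientation, and a full layer of such filters covers all orientations.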
Early CNNs
This idea of orientation-selective columns inspired early research into CNNs.
When combined, these 12 groups of neurons in layer