Restricted Boltzmann Machines

Learn how simple networks can “learn” the distribution of image data and serve as building blocks for larger networks.

The neural network model that we will apply to the MNIST data has its origins in earlier research on how neurons in the mammalian brain might work together to transmit signals and encode patterns as memories. As we discussed earlier, Hebbian learning states that "neurons that fire together, wire together" (Gurney, Kevin. 2018. *An Introduction to Neural Networks*. CRC Press.), and many models, including the multilayer perceptron, used this idea to develop learning rules.
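To make the rule concrete, here is a minimal sketch of a Hebbian weight update in NumPy. The learning rate and the activation values are illustrative assumptions, not parameters taken from any model in this lesson.

```python
import numpy as np

# Hebbian rule: the weight between two units grows in proportion to the
# product of their activations ("fire together, wire together").
def hebbian_update(weights, activations, learning_rate=0.1):
    # The outer product captures the co-activation of every pair of units.
    return weights + learning_rate * np.outer(activations, activations)

# Illustrative example: three units, the first two firing together.
w = np.zeros((3, 3))
x = np.array([1.0, 1.0, 0.0])
w = hebbian_update(w, x)
print(w)  # only the weights between the co-active units have increased
```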

One of these models was the Hopfield network (Sathasivam, Saratha. 2008. "Logic Learning in Hopfield Networks."), developed in the 1970s–80s by several researchers (Hebb, D. O. 2002. *The Organization of Behavior: A Neuropsychological Theory*. Lawrence Erlbaum.). In this network, each "neuron" is connected to every other neuron by a symmetric weight, with no self-connections (no self-loops). Unlike the multilayer perceptron and the other architectures we studied, the Hopfield network is an undirected graph, since each edge carries the same weight in both directions. A sketch of this structure follows below.
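The snippet below is a minimal sketch of that structure: it builds a Hopfield weight matrix from a set of binary (±1) patterns using the standard Hebbian prescription. The symmetric matrix and the zeroed diagonal encode the "undirected, no self-loops" constraints described above; the specific patterns are illustrative.

```python
import numpy as np

def hopfield_weights(patterns):
    """Build a Hopfield weight matrix from +/-1 patterns via the Hebb rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)       # outer product is symmetric by construction
    np.fill_diagonal(w, 0.0)      # no self-connections
    return w / len(patterns)

# Two illustrative 4-unit memories.
memories = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
W = hopfield_weights(memories)
assert np.allclose(W, W.T)        # undirected: edges carry the same weight both ways
```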
