The Boltzmann Machine
Learn about the Boltzmann machine model in detail.
Advancements over the Hopfield network
A much more general form of the Hopfield network was introduced by Geoffrey Hinton and Terrence Sejnowski in the mid-1980s, which they called the Boltzmann machine.
These recurrent networks incorporate two further important aspects:
- They include hidden nodes that are not directly connected to the outside world. As with multilayer perceptrons, hidden nodes allow an unrestricted internal structure that permits, in principle, unbounded complexity of internal computations.
- The Boltzmann machine uses stochastic nodes and therefore represents a general probabilistic model, as sketched in the code after this list.
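To make the stochastic-node idea concrete, here is a minimal sketch of the standard update rule for a binary Boltzmann machine node: each node turns on with a probability given by the logistic function of its net input, scaled by a temperature. The names (`w`, `b`, `T`, `stochastic_update`) and the toy network are illustrative assumptions, not something fixed by this lesson:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_update(s, w, b, rng, T=1.0):
    """One asynchronous sweep over all binary (0/1) nodes.

    Node i turns on with probability sigmoid((w[i] @ s + b[i]) / T),
    so the network samples states stochastically rather than settling
    deterministically the way a Hopfield network does.
    """
    for i in range(len(s)):
        net = w[i] @ s + b[i]                      # net input to node i
        s[i] = 1.0 if rng.random() < sigmoid(net / T) else 0.0
    return s

# Tiny example: 4 fully connected nodes, symmetric weights, no self-connections
rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))
w = (w + w.T) / 2                                  # symmetric weights
np.fill_diagonal(w, 0.0)                           # no self-connections
b = np.zeros(4)
s = rng.integers(0, 2, size=4).astype(float)       # random initial state
for _ in range(10):                                # run the stochastic dynamics
    s = stochastic_update(s, w, b, rng, T=1.0)
print(s)
```

Running the same dynamics at a lower temperature makes the updates more deterministic, recovering Hopfield-like behavior in the limit.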
Such a stochastic dynamic network, a recurrent system with hidden nodes, together with adjustable connections, gives the system enough degrees of freedom to approximate any dynamical system. While this has been recognized for a long time, finding practical training rules for such systems has been a major challenge, and major progress has been made only recently. These machines use unsupervised learning to learn hierarchical representations based on the statistics of the world. Such representations are key to more advanced applications of machine learning and to human cognitive abilities.
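For reference, the probability distribution that gives the machine its name is the Boltzmann (Gibbs) distribution over network states. This standard formulation is added here for context; it is not spelled out in the lesson text:

$$
p(\mathbf{s}) = \frac{1}{Z}\, e^{-E(\mathbf{s})/T},
\qquad
E(\mathbf{s}) = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j \;-\; \sum_i b_i s_i,
\qquad
Z = \sum_{\mathbf{s}} e^{-E(\mathbf{s})/T},
$$

where $w_{ij}$ are the symmetric connection weights, $b_i$ the biases, and $T$ the temperature.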
Example of the Boltzmann network
The basic building block of a Boltzmann network is a network with one visible layer and one hidden layer, connected by a single layer of weights. An example of such a network is shown in the figure below. Each node represents a random variable, similar to the Bayesian networks discussed earlier.
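As a rough sketch of how such a one-visible-layer, one-hidden-layer block behaves (in the literature this structure is known as a restricted Boltzmann machine; the layer sizes and function names below are illustrative assumptions), the code samples the binary hidden random variables given a visible pattern and then reconstructs the visible layer through the same symmetric weights:

```python
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden = 6, 3                              # illustrative sizes

W = rng.normal(scale=0.1, size=(n_visible, n_hidden))   # visible-to-hidden weights
b_v = np.zeros(n_visible)                               # visible biases
b_h = np.zeros(n_hidden)                                # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """Each hidden node is on with probability sigmoid(v @ W + b_h)."""
    p_h = sigmoid(v @ W + b_h)
    return (rng.random(n_hidden) < p_h).astype(float), p_h

def sample_visible(h):
    """Symmetric connections: the visible layer is sampled from the hidden states."""
    p_v = sigmoid(W @ h + b_v)
    return (rng.random(n_visible) < p_v).astype(float), p_v

v0 = rng.integers(0, 2, size=n_visible).astype(float)   # a visible data pattern
h, p_h = sample_hidden(v0)                              # stochastic hidden sample
v1, p_v = sample_visible(h)                             # stochastic reconstruction
print("hidden sample:", h)
print("reconstruction:", v1)
```

Alternating these two sampling steps is the basic operation used when running or training such a block.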