Deep Boltzmann Machines
Abstract
We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized by a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.
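For concreteness, here is a minimal NumPy sketch of the two-expectation gradient step the abstract describes: mean-field variational inference supplies the data-dependent statistics, and persistent Gibbs ("fantasy") chains supply the data-independent ones. It assumes a two-hidden-layer DBM with weight matrices `W1` and `W2`, omits biases and the pre-training phase, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v, W1, W2, n_steps=10):
    """Variational approximation for the data-dependent expectations:
    iterate the mean-field fixed-point equations for the hidden-unit
    marginals given a clamped visible vector."""
    mu2 = np.full((v.shape[0], W2.shape[1]), 0.5)
    for _ in range(n_steps):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)  # middle layer sees both neighbors
        mu2 = sigmoid(mu1 @ W2)             # top layer sees only the layer below
    return mu1, mu2

def gibbs_step(rng, v, h1, h2, W1, W2):
    """One alternating Gibbs sweep on the persistent Markov chains,
    used for the data-independent (model) expectations."""
    h1 = rng.binomial(1, sigmoid(v @ W1 + h2 @ W2.T))
    v = rng.binomial(1, sigmoid(h1 @ W1.T))
    h2 = rng.binomial(1, sigmoid(h1 @ W2))
    return v, h1, h2

def train_step(rng, v_data, chain, W1, W2, lr=1e-3):
    """One gradient step: data term from mean-field minus model term
    from the persistent chains (both terms are outer-product statistics)."""
    mu1, mu2 = mean_field(v_data, W1, W2)
    v_f, h1_f, h2_f = gibbs_step(rng, *chain, W1, W2)
    n, m = v_data.shape[0], v_f.shape[0]
    W1 += lr * (v_data.T @ mu1 / n - v_f.T @ h1_f / m)
    W2 += lr * (mu1.T @ mu2 / n - h1_f.T @ h2_f / m)
    return (v_f, h1_f, h2_f), W1, W2
```

In the paper's full procedure, greedy layer-by-layer pre-training would additionally provide the initialization for the mean-field pass; the sketch above instead starts the top-layer marginals at 0.5 for simplicity.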
Cite
Text
Salakhutdinov and Hinton. "Deep Boltzmann Machines." Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.

Markdown
[Salakhutdinov and Hinton. "Deep Boltzmann Machines." Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.](https://mlanthology.org/aistats/2009/salakhutdinov2009aistats-deep/)

BibTeX
@inproceedings{salakhutdinov2009aistats-deep,
title = {{Deep Boltzmann Machines}},
author = {Salakhutdinov, Ruslan and Hinton, Geoffrey},
booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
year = {2009},
  pages = {448--455},
volume = {5},
url = {https://mlanthology.org/aistats/2009/salakhutdinov2009aistats-deep/}
}