Deep Boltzmann Machines as Feed-Forward Hierarchies

Abstract

The deep Boltzmann machine is a powerful model that extracts the hierarchical structure of observed data. While inference is typically slow due to its undirected nature, we argue that the emerging feature hierarchy is still explicit enough to be traversed in a feed-forward fashion. We corroborate this claim by training a set of deep neural networks on real data and measuring how the representation evolves from layer to layer. The analysis reveals that the deep Boltzmann machine produces a feed-forward hierarchy of increasingly invariant representations that clearly surpasses the layer-wise approach.
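
As a rough illustration of what "measuring how the representation evolves from layer to layer" can look like in practice, the sketch below fits a simple linear probe on each layer's activations and reports its test accuracy. This is a minimal sketch under stated assumptions (scikit-learn, a precomputed list of per-layer activation arrays, a generic classification task), not the paper's actual analysis protocol.

    # Minimal sketch (assumption: not the paper's protocol): fit a linear probe
    # on each layer's activations and report test accuracy, so one can see how
    # the representation evolves from layer to layer.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def layer_wise_probe(layer_activations, labels, seed=0):
        """layer_activations: list of (n_samples, n_features) arrays, one per layer.
        Returns one linear-probe test accuracy per layer."""
        scores = []
        for h in layer_activations:
            X_tr, X_te, y_tr, y_te = train_test_split(
                h, labels, test_size=0.3, random_state=seed)
            probe = LogisticRegression(max_iter=1000)
            probe.fit(X_tr, y_tr)
            scores.append(probe.score(X_te, y_te))
        return scores

    # Hypothetical usage: `activations` would hold the feed-forward pass of a
    # trained model applied to the data, one array per layer.
    # accuracies = layer_wise_probe(activations, labels)

If the learned hierarchy is indeed traversable in a feed-forward fashion, such per-layer scores would be expected to improve with depth.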

Cite

Text

Montavon et al. "Deep Boltzmann Machines as Feed-Forward Hierarchies." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.

Markdown

[Montavon et al. "Deep Boltzmann Machines as Feed-Forward Hierarchies." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.](https://mlanthology.org/aistats/2012/montavon2012aistats-deep/)

BibTeX

@inproceedings{montavon2012aistats-deep,
  title     = {{Deep Boltzmann Machines as Feed-Forward Hierarchies}},
  author    = {Montavon, Gr{\'e}goire and Braun, Mikio and M{\"u}ller, Klaus-Robert},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  year      = {2012},
  pages     = {798--804},
  volume    = {22},
  url       = {https://mlanthology.org/aistats/2012/montavon2012aistats-deep/}
}