Top-Down Regularization of Deep Belief Networks

Abstract

Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: a fully unsupervised learning phase followed by a strongly discriminative optimization phase. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme using a method that regularizes deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a top-down pass is used. Experiments on the MNIST dataset show improvements over existing algorithms for deep belief networks. Object recognition experiments on the Caltech-101 dataset also yield competitive results.
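To make the core idea concrete, the sketch below shows one contrastive-divergence (CD-1) update for a restricted Boltzmann machine in which the bottom-up hidden probabilities are blended with a top-down signal before the gradient statistics are collected. This is an illustrative sketch only, not the authors' actual algorithm: the blending weight `lam`, the function name `cd1_step_with_topdown`, and the use of a fixed `h_target` array as the top-down signal are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step_with_topdown(v, W, b, c, h_target, lr=0.1, lam=0.5):
    """One CD-1 update where hidden activations are regularized
    toward a top-down target (illustrative sketch, not the paper's
    exact procedure).

    v        : (batch, n_visible) binary/real visible data
    W        : (n_visible, n_hidden) weights
    b, c     : visible and hidden biases
    h_target : (batch, n_hidden) top-down signal in [0, 1]
    lam      : assumed blending weight between bottom-up and top-down
    """
    # Bottom-up pass: hidden probabilities given the data
    h_prob = sigmoid(v @ W + c)
    # Blend bottom-up probabilities with the top-down signal
    h_reg = (1.0 - lam) * h_prob + lam * h_target
    # Sample hidden states from the blended probabilities
    h_samp = (rng.random(h_reg.shape) < h_reg).astype(float)
    # Top-down reconstruction and a second bottom-up pass
    v_recon = sigmoid(h_samp @ W.T + b)
    h_recon = sigmoid(v_recon @ W + c)
    # CD-1 gradient step using the regularized positive statistics
    W += lr * (v.T @ h_reg - v_recon.T @ h_recon) / v.shape[0]
    b += lr * (v - v_recon).mean(axis=0)
    c += lr * (h_reg - h_recon).mean(axis=0)
    return W, b, c
```

Setting `lam = 0` recovers a standard CD-1 update; a nonzero `lam` pulls the learned hidden representations toward the top-down signal, which is the spirit of the regularization the abstract describes.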

Cite

Text

Goh et al. "Top-Down Regularization of Deep Belief Networks." Neural Information Processing Systems, 2013.

Markdown

[Goh et al. "Top-Down Regularization of Deep Belief Networks." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/goh2013neurips-topdown/)

BibTeX

@inproceedings{goh2013neurips-topdown,
  title     = {{Top-Down Regularization of Deep Belief Networks}},
  author    = {Goh, Hanlin and Thome, Nicolas and Cord, Matthieu and Lim, Joo-Hwee},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {1878--1886},
  url       = {https://mlanthology.org/neurips/2013/goh2013neurips-topdown/}
}