Gradual Training Method for Denoising Auto Encoders

Abstract

Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for deep DAEs in which layers are gradually added and all existing layers keep adapting as new layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training, in both reconstruction quality and classification error, on the MNIST and CIFAR datasets.
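To make the contrast with stacked training concrete, here is a minimal sketch of the gradual scheme, assuming PyTorch. The layer sizes, masking-noise level, and training loop are illustrative choices, not the paper's exact architecture or hyper-parameters; the key point is that after a new layer is added, all layers added so far continue to be updated, whereas stacked training would freeze the earlier ones.

```python
# Sketch of gradual DAE training (hypothetical, not the authors' code).
import torch
import torch.nn as nn

def corrupt(x, noise=0.3):
    # Masking noise: zero out a random fraction of the input units.
    return x * (torch.rand_like(x) > noise).float()

class StackedDAE(nn.Module):
    def __init__(self, sizes):  # e.g. sizes = [784, 500, 250]
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes, sizes[1:]))
        self.decoders = nn.ModuleList(
            nn.Linear(b, a) for a, b in zip(sizes, sizes[1:]))

    def forward(self, x, depth):
        # Encode through the first `depth` layers, then decode back.
        h = x
        for enc in self.encoders[:depth]:
            h = torch.sigmoid(enc(h))
        for dec in reversed(self.decoders[:depth]):
            h = torch.sigmoid(dec(h))
        return h

def gradual_train(model, data, epochs_per_layer=5, lr=0.1):
    # Gradual scheme: after adding layer k, keep updating ALL layers
    # 1..k on the full reconstruction loss (stacked training would
    # instead optimize only layer k with layers 1..k-1 frozen).
    for depth in range(1, len(model.encoders) + 1):
        params = [p for m in (list(model.encoders[:depth]) +
                              list(model.decoders[:depth]))
                  for p in m.parameters()]
        opt = torch.optim.SGD(params, lr=lr)
        for _ in range(epochs_per_layer):
            for x in data:
                loss = nn.functional.mse_loss(model(corrupt(x), depth), x)
                opt.zero_grad()
                loss.backward()
                opt.step()

# Toy usage:
# data = [torch.rand(32, 784) for _ in range(10)]
# model = StackedDAE([784, 500, 250])
# gradual_train(model, data)
```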

Cite

Text

Kalmanovich and Chechik. "Gradual Training Method for Denoising Auto Encoders." International Conference on Learning Representations, 2015.

Markdown

[Kalmanovich and Chechik. "Gradual Training Method for Denoising Auto Encoders." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/kalmanovich2015iclr-gradual/)

BibTeX

@inproceedings{kalmanovich2015iclr-gradual,
  title     = {{Gradual Training Method for Denoising Auto Encoders}},
  author    = {Kalmanovich, Alexander and Chechik, Gal},
  booktitle = {International Conference on Learning Representations},
  year      = {2015},
  url       = {https://mlanthology.org/iclr/2015/kalmanovich2015iclr-gradual/}
}