Learning Dynamics of Linear Denoising Autoencoders

Abstract

Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding of how the input noise influences learning is still lacking. Here we develop such a theory for DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while still learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.
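
To make the comparison in the abstract concrete, here is a minimal simulation sketch, not the authors' implementation: it assumes a single linear map trained by full-batch gradient descent, isotropic Gaussian input corruption with standard deviation `sigma`, and a squared reconstruction loss, whereas the paper analyses a separate encoder and decoder. The data, noise level, weight-decay strength `lam`, learning rate, and step count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three high-variance directions plus many low-variance ones.
N, D = 2000, 20
U, _ = np.linalg.qr(rng.standard_normal((D, D)))           # random orthonormal basis
scales = np.concatenate([[5.0, 3.0, 2.0], 0.1 * np.ones(D - 3)])
X = (U * scales) @ rng.standard_normal((D, N))              # columns are samples

sigma = 1.0        # std of the input corruption (illustrative)
lam = sigma ** 2   # weight decay matched to the expected noise penalty (assumption)
lr, steps = 0.02, 500

W_dae = np.zeros((D, D))   # single linear map standing in for encoder/decoder
W_rae = np.zeros((D, D))

for _ in range(steps):
    # Denoising objective: reconstruct clean X from corrupted inputs.
    X_tilde = X + sigma * rng.standard_normal(X.shape)
    W_dae += lr * (2.0 / N) * (X - W_dae @ X_tilde) @ X_tilde.T

    # Regularised autoencoder: clean inputs plus weight decay.
    W_rae += lr * ((2.0 / N) * (X - W_rae @ X) @ X.T - 2.0 * lam * W_rae)

# Both maps keep the high-variance directions (singular values close to 1)
# and shrink the low-variance ones towards 0.
print("DAE singular values:", np.round(np.linalg.svd(W_dae, compute_uv=False)[:5], 3))
print("RAE singular values:", np.round(np.linalg.svd(W_rae, compute_uv=False)[:5], 3))
```

The parallel drawn in the sketch rests on a standard expected-loss calculation (not taken from the paper): for zero-mean isotropic corruption, the expected denoising loss equals the clean reconstruction loss plus a penalty proportional to the squared Frobenius norm of the linear map, which is why both runs shrink the low variance directions in a similar way.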

Cite

Text

Pretorius et al. "Learning Dynamics of Linear Denoising Autoencoders." International Conference on Machine Learning, 2018.

Markdown

[Pretorius et al. "Learning Dynamics of Linear Denoising Autoencoders." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/pretorius2018icml-learning/)

BibTeX

@inproceedings{pretorius2018icml-learning,
  title     = {{Learning Dynamics of Linear Denoising Autoencoders}},
  author    = {Pretorius, Arnu and Kroon, Steve and Kamper, Herman},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {4141--4150},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/pretorius2018icml-learning/}
}