Regularization for Unsupervised Deep Neural Nets

Abstract

Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks (DBNs), are powerful tools for feature selection and pattern recognition tasks. We demonstrate that overfitting occurs in such models just as in deep feedforward neural networks, and discuss possible regularization methods to reduce overfitting. We also propose a "partial" approach to improve the efficiency of Dropout/DropConnect in this scenario, and discuss the theoretical justification of these methods from model convergence and likelihood bounds. Finally, we compare the performance of these methods based on their likelihood and classification error rates for various pattern recognition data sets.
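
The "partial" Dropout/DropConnect approach is developed in the paper itself; as a minimal sketch of the underlying idea only, the hypothetical function below applies a Dropout mask to the hidden units of a Bernoulli RBM during one contrastive-divergence (CD-1) update, so that dropped units contribute to neither the positive nor the negative gradient statistics. The function name, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_cd1_dropout(v0, W, b, c, lr=0.01, p_keep=0.5):
    """One CD-1 step on a batch v0 (batch x visible) with hidden-unit Dropout.

    W: (visible x hidden) weights; b: visible bias; c: hidden bias.
    A Bernoulli mask zeroes each hidden unit with probability 1 - p_keep;
    the same mask is used in both phases so the thinned sub-model's
    positive and negative statistics stay consistent.
    """
    mask = (rng.random(c.shape) < p_keep).astype(v0.dtype)

    # Positive phase: hidden probabilities given the data, masked.
    h0 = sigmoid(v0 @ W + c) * mask
    h0_sample = (rng.random(h0.shape) < h0).astype(v0.dtype)

    # Negative phase: one Gibbs step down to the visible layer and back up.
    v1 = sigmoid(h0_sample @ W.T + b)
    h1 = sigmoid(v1 @ W + c) * mask

    # Approximate gradient ascent on the log-likelihood.
    W += lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c

At prediction time, standard Dropout practice would rescale the learned weights by p_keep (or average over masks) so that expected hidden inputs match those seen during training.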

Cite

Text

Wang and Klabjan. "Regularization for Unsupervised Deep Neural Nets." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/aaai.v31i1.10787

Markdown

[Wang and Klabjan. "Regularization for Unsupervised Deep Neural Nets." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/wang2017aaai-regularization/) doi:10.1609/aaai.v31i1.10787

BibTeX

@inproceedings{wang2017aaai-regularization,
  title     = {{Regularization for Unsupervised Deep Neural Nets}},
  author    = {Wang, Baiyang and Klabjan, Diego},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {2681--2687},
  doi       = {10.1609/aaai.v31i1.10787},
  url       = {https://mlanthology.org/aaai/2017/wang2017aaai-regularization/}
}