An Analysis of Unsupervised Pre-Training in Light of Recent Advances

Abstract

Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods, leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, and 3) verify our findings on STL-10. We discover that unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and, surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10.
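
To make the setup the abstract describes concrete, the sketch below shows a generic two-stage pipeline: unsupervised pre-training of a convolutional encoder with a reconstruction objective on unlabelled images, followed by supervised fine-tuning of a classifier with dropout on a (smaller) labelled set. This is a minimal PyTorch illustration under assumed architecture and hyperparameters, not the authors' exact method; the tensors stand in for CIFAR-10 / STL-10 data.

```python
# Minimal sketch of unsupervised pre-training + supervised fine-tuning.
# Architecture, learning rates, and data are placeholder assumptions.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(), # 16x16 -> 8x8
        )
    def forward(self, x):
        return self.net(x)

class ConvDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),               # 16x16 -> 32x32
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = ConvEncoder(), ConvDecoder()

# Stage 1: unsupervised pre-training on unlabelled images (reconstruction loss).
unlabelled = torch.rand(256, 3, 32, 32)  # placeholder for the unlabelled set
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for epoch in range(5):
    recon = decoder(encoder(unlabelled))
    loss = nn.functional.mse_loss(recon, unlabelled)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning with dropout, encoder initialised from stage 1.
classifier = nn.Sequential(
    encoder,
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(128 * 8 * 8, 10),  # 10 classes, as in CIFAR-10 / STL-10
)
labelled_x = torch.rand(64, 3, 32, 32)        # placeholder labelled images
labelled_y = torch.randint(0, 10, (64,))      # placeholder labels
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for epoch in range(5):
    logits = classifier(labelled_x)
    loss = nn.functional.cross_entropy(logits, labelled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The ratio studied in the paper corresponds to the relative sizes of the unlabelled set used in stage 1 and the labelled set used in stage 2; varying that ratio (and toggling dropout and data augmentation in stage 2) reproduces the kind of comparison the abstract describes.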

Cite

Text

Le Paine et al. "An Analysis of Unsupervised Pre-Training in Light of Recent Advances." International Conference on Learning Representations, 2015.

Markdown

[Le Paine et al. "An Analysis of Unsupervised Pre-Training in Light of Recent Advances." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/paine2015iclr-analysis/)

BibTeX

@inproceedings{paine2015iclr-analysis,
  title     = {{An Analysis of Unsupervised Pre-Training in Light of Recent Advances}},
  author    = {Le Paine, Tom and Khorrami, Pooya and Han, Wei and Huang, Thomas S.},
  booktitle = {International Conference on Learning Representations},
  year      = {2015},
  url       = {https://mlanthology.org/iclr/2015/paine2015iclr-analysis/}
}