An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation
Abstract
Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.
Cite
Text
Larochelle et al. "An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation." International Conference on Machine Learning, 2007. doi:10.1145/1273496.1273556
Markdown
[Larochelle et al. "An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation." International Conference on Machine Learning, 2007.](https://mlanthology.org/icml/2007/larochelle2007icml-empirical/) doi:10.1145/1273496.1273556
BibTeX
@inproceedings{larochelle2007icml-empirical,
title = {{An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation}},
author = {Larochelle, Hugo and Erhan, Dumitru and Courville, Aaron C. and Bergstra, James and Bengio, Yoshua},
booktitle = {International Conference on Machine Learning},
year = {2007},
pages = {473--480},
doi = {10.1145/1273496.1273556},
url = {https://mlanthology.org/icml/2007/larochelle2007icml-empirical/}
}