Composite Denoising Autoencoders
Abstract
In representation learning, it is often desirable to learn features at different levels of scale. For example, in image data, some edges will span only a few pixels, whereas others will span a large portion of the image. We introduce an unsupervised representation learning method called a composite denoising autoencoder (CDA) to address this. We exploit the observation from previous work that in a denoising autoencoder, training with lower levels of noise results in more specific, fine-grained features. In a CDA, different parts of the network are trained with different versions of the same input, corrupted at different noise levels. We introduce a novel cascaded training procedure which is designed to avoid types of bad solutions that are specific to CDAs. We show that CDAs learn effective representations on two different image data sets.
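The core idea in the abstract can be sketched in a few lines of NumPy: the hidden layer is partitioned into groups, and each group encodes a copy of the same input corrupted at its own noise level, while the decoder reconstructs the clean input from the concatenated codes. This is a minimal illustrative sketch, not the paper's implementation: the Gaussian corruption, tied weights, layer sizes, noise levels, and synthetic low-rank data are all assumptions made here for brevity (the paper also uses a cascaded training procedure that this sketch omits).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy composite DAE: hidden units are split into groups; each group sees the
# input corrupted at a different noise level. All sizes and noise levels below
# are illustrative assumptions, not values from the paper.
d = 20                                   # input dimension
groups = [(8, 0.5), (8, 0.1)]            # (group size, corruption std)
h_total = sum(size for size, _ in groups)

W = rng.normal(0.0, 0.1, (d, h_total))   # tied weights: decoder uses W.T
b = np.zeros(h_total)                    # encoder bias
c = np.zeros(d)                          # decoder bias

def forward(x):
    """Encode x once per group, each copy corrupted at that group's level."""
    hs, xcs, start = [], [], 0
    for size, noise in groups:
        xc = x + rng.normal(0.0, noise, d)   # group-specific corruption
        hs.append(sigmoid(xc @ W[:, start:start + size] + b[start:start + size]))
        xcs.append(xc)
        start += size
    hvec = np.concatenate(hs)
    return hvec, xcs, hvec @ W.T + c         # reconstruct the *clean* input

# Synthetic low-rank data so there is structure to learn.
X = rng.normal(0.0, 1.0, (200, 4)) @ rng.normal(0.0, 0.5, (4, d))

lr = 0.02
losses = []
for epoch in range(50):
    total = 0.0
    for x in X:
        hvec, xcs, xhat = forward(x)
        err = xhat - x
        total += 0.5 * err @ err
        da = (err @ W) * hvec * (1.0 - hvec)     # grad at encoder pre-activations
        dW = np.outer(err, hvec)                 # decoder contribution (tied W)
        start = 0
        for (size, _), xc in zip(groups, xcs):   # encoder grad uses each group's
            dW[:, start:start + size] += np.outer(xc, da[start:start + size])
            start += size                        # own corrupted input
        W -= lr * dW
        b -= lr * da
        c -= lr * err
    losses.append(total / len(X))

print(f"mean reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that every hidden group is trained to reconstruct the same clean input, so the high-noise group is pushed toward coarse, global features while the low-noise group can afford fine-grained ones, matching the observation the abstract builds on.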
Cite
Text
Geras and Sutton. "Composite Denoising Autoencoders." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2016. doi:10.1007/978-3-319-46128-1_43
Markdown
[Geras and Sutton. "Composite Denoising Autoencoders." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2016.](https://mlanthology.org/ecmlpkdd/2016/geras2016ecmlpkdd-composite/) doi:10.1007/978-3-319-46128-1_43
BibTeX
@inproceedings{geras2016ecmlpkdd-composite,
title = {{Composite Denoising Autoencoders}},
author = {Geras, Krzysztof J. and Sutton, Charles},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2016},
pages = {681--696},
doi = {10.1007/978-3-319-46128-1_43},
url = {https://mlanthology.org/ecmlpkdd/2016/geras2016ecmlpkdd-composite/}
}