Saturating Auto-Encoder
Abstract
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
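The idea in the abstract can be sketched concretely. Below is a minimal, hypothetical illustration (not the paper's implementation) of a SATAE-style objective, assuming a shrink activation f(z) = sign(z)·max(|z| − 1, 0), whose saturated (zero-gradient) region is [−1, 1]; the regularizer penalizes each pre-activation by its distance to that saturated region, encouraging hidden units to saturate. All function and parameter names here are illustrative choices.

```python
import numpy as np

def shrink(z):
    # Shrink activation: outputs zero (and has zero gradient)
    # for |z| <= 1, i.e. [-1, 1] is its saturated region.
    return np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)

def saturation_penalty(z):
    # Distance from each pre-activation to the nearest point of
    # the saturated region [-1, 1]: zero inside, |z| - 1 outside.
    return np.maximum(np.abs(z) - 1.0, 0.0)

def satae_loss(x, W_enc, b_enc, W_dec, b_dec, lam=0.1):
    # Reconstruction error plus the saturation regularizer,
    # weighted by a hypothetical coefficient `lam`.
    z = x @ W_enc + b_enc       # encoder pre-activations
    h = shrink(z)               # hidden code
    x_hat = h @ W_dec + b_dec   # reconstruction
    recon = np.sum((x - x_hat) ** 2)
    sat = np.sum(saturation_penalty(z))
    return recon + lam * sat
```

Because the penalty is zero exactly where the activation's gradient is zero, minimizing it pushes hidden units into flat regions of f, which is what limits the auto-encoder's ability to reconstruct points far from the data manifold.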
Cite
Text
Goroshin and LeCun. "Saturating Auto-Encoder." International Conference on Learning Representations, 2013. doi:10.48550/arxiv.1301.3577
Markdown
[Goroshin and LeCun. "Saturating Auto-Encoder." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/goroshin2013iclr-saturating/) doi:10.48550/arxiv.1301.3577
BibTeX
@inproceedings{goroshin2013iclr-saturating,
title = {{Saturating Auto-Encoder}},
author = {Goroshin, Rostislav and LeCun, Yann},
booktitle = {International Conference on Learning Representations},
year = {2013},
doi = {10.48550/arxiv.1301.3577},
url = {https://mlanthology.org/iclr/2013/goroshin2013iclr-saturating/}
}