Conservativeness of Untied Auto-Encoders
Abstract
We discuss necessary and sufficient conditions for an auto-encoder to define a conservative vector field, in which case it is associated with an energy function akin to the unnormalized log-probability of the data. We show that the conditions for conservativeness are more general than requiring the encoder and decoder weights to be the same ("tied weights"), and that they also depend on the form of the hidden unit activation functions. Moreover, we show that contractive training criteria, such as denoising, enforce these conditions locally. Based on these observations, we show how auto-encoders can be used to extract the conservative component of a vector field.
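To make the notion of conservativeness concrete, here is a minimal numerical sketch (not from the paper; the weight names, activation, and dimensions are illustrative assumptions). It checks whether the reconstruction vector field F(x) = r(x) - x of a small untied auto-encoder has a symmetric Jacobian, the standard condition for a smooth vector field to be conservative, and confirms that tying the weights makes the asymmetry vanish.

```python
import numpy as np

# Sketch: test conservativeness of an auto-encoder's reconstruction field
# F(x) = r(x) - x by checking symmetry of its Jacobian at a point.
rng = np.random.default_rng(0)
d, h = 5, 7                          # input and hidden dimensions (illustrative)
W_enc = rng.normal(size=(h, d))      # encoder weights
W_dec = rng.normal(size=(d, h))      # decoder weights (untied: W_dec != W_enc.T)
b, c = rng.normal(size=h), rng.normal(size=d)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def recon_field(x):
    """Reconstruction vector field F(x) = r(x) - x."""
    return W_dec @ sigmoid(W_enc @ x + b) + c - x

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f at x."""
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = rng.normal(size=d)
J = jacobian(recon_field, x)
print(f"untied: max |J - J^T| = {np.max(np.abs(J - J.T)):.4f}")   # generally nonzero

W_dec = W_enc.T                      # tie the weights
J_tied = jacobian(recon_field, x)
print(f"tied:   max |J - J^T| = {np.max(np.abs(J_tied - J_tied.T)):.2e}")  # ~0
```

With tied weights the Jacobian of r is W_enc.T @ diag(sigma') @ W_enc, which is symmetric by construction; with generic untied weights it is W_dec @ diag(sigma') @ W_enc, which is not, illustrating why untied auto-encoders need extra conditions to define an energy function.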
Cite
Text
Im et al. "Conservativeness of Untied Auto-Encoders." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10268
Markdown
[Im et al. "Conservativeness of Untied Auto-Encoders." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/im2016aaai-conservativeness/) doi:10.1609/AAAI.V30I1.10268
BibTeX
@inproceedings{im2016aaai-conservativeness,
title = {{Conservativeness of Untied Auto-Encoders}},
author = {Im, Daniel Jiwoong and Belghazi, Mohamed Ishmael Diwan and Memisevic, Roland},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {1694-1700},
doi = {10.1609/AAAI.V30I1.10268},
url = {https://mlanthology.org/aaai/2016/im2016aaai-conservativeness/}
}