Reinterpreting Importance-Weighted Autoencoders
Abstract
The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood than the standard evidence lower bound. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, present a tighter lower bound, and visualize the implicit importance-weighted distribution.
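For reference, the importance-weighted bound the abstract refers to (the IWAE objective of Burda et al.) can be sketched as follows; the notation \(p(x, z)\) for the model joint and \(q(z \mid x)\) for the recognition network is assumed here rather than taken from the abstract itself:

\[
\mathcal{L}_K(x) \;=\; \mathbb{E}_{z_1,\dots,z_K \sim q(z \mid x)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K}\frac{p(x, z_k)}{q(z_k \mid x)}\right] \;\le\; \log p(x),
\]

with \(K = 1\) recovering the standard evidence lower bound. The paper's reinterpretation reads this same quantity as the standard variational bound evaluated under an implicit, more complex importance-weighted distribution over \(z\).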
Cite
Text
Cremer et al. "Reinterpreting Importance-Weighted Autoencoders." International Conference on Learning Representations, 2017.

Markdown
[Cremer et al. "Reinterpreting Importance-Weighted Autoencoders." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/cremer2017iclr-reinterpreting/)

BibTeX
@inproceedings{cremer2017iclr-reinterpreting,
title = {{Reinterpreting Importance-Weighted Autoencoders}},
author = {Cremer, Chris and Morris, Quaid and Duvenaud, David},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/cremer2017iclr-reinterpreting/}
}