Meta-Amortized Variational Inference and Learning
Abstract
Despite the recent success of probabilistic modeling and its applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution is closely related to the ones seen during training. In this work, we present a doubly-amortized variational inference procedure as a way to address this challenge. By sharing computation across not only a set of query inputs, but also a set of different, related probabilistic models, we learn transferable latent representations that generalize across several related distributions. In particular, given a set of distributions over images, we find that the learned representations transfer across different data transformations. We empirically demonstrate the effectiveness of our method by introducing the MetaVAE, and show that it significantly outperforms baselines on downstream image classification tasks on MNIST (10-50%) and NORB (10-35%).
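The abstract describes doubly-amortized inference only at a high level. As a rough illustration of the idea, below is a minimal PyTorch sketch of a VAE whose encoder is amortized over both individual inputs and a dataset-level summary; the names (MetaEncoder, Decoder, meta_vae_loss, dataset_sample) and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a doubly-amortized ("meta") VAE objective.
# Assumes PyTorch; all module names and sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaEncoder(nn.Module):
    """q(z | x, D): amortized over inputs x AND over datasets D via a
    permutation-invariant summary of a sample set drawn from D."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.summarize = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.net = nn.Sequential(nn.Linear(x_dim + h_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, x, dataset_sample):
        # Mean-pool a small sample from the dataset to get a dataset embedding.
        d = self.summarize(dataset_sample).mean(dim=0, keepdim=True)
        h = self.net(torch.cat([x, d.expand(x.size(0), -1)], dim=-1))
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """p(x | z): Bernoulli decoder shared across the related datasets."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, z):
        return self.net(z)  # logits

def meta_vae_loss(encoder, decoder, x, dataset_sample):
    """Negative ELBO for one batch from one dataset; the meta objective
    averages this loss over many related datasets."""
    mu, logvar = encoder(x, dataset_sample)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    recon = F.binary_cross_entropy_with_logits(decoder(z), x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)
```

In an outer loop, one would repeatedly sample a dataset from the family of related distributions (e.g., differently transformed MNIST images), draw a batch plus a small summary sample from it, and minimize meta_vae_loss averaged across datasets, so the inference network is amortized over both queries and distributions.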
Cite
Text
Wu et al. "Meta-Amortized Variational Inference and Learning." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6111
Markdown
[Wu et al. "Meta-Amortized Variational Inference and Learning." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/wu2020aaai-meta/) doi:10.1609/AAAI.V34I04.6111
BibTeX
@inproceedings{wu2020aaai-meta,
title = {{Meta-Amortized Variational Inference and Learning}},
author = {Wu, Mike and Choi, Kristy and Goodman, Noah D. and Ermon, Stefano},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {6404-6412},
doi = {10.1609/AAAI.V34I04.6111},
url = {https://mlanthology.org/aaai/2020/wu2020aaai-meta/}
}