Improved Variational Inference with Inverse Autoregressive Flow

Abstract

The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
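The abstract's core mechanism can be made concrete. Below is a minimal NumPy sketch of the IAF update from the paper, z_t = σ_t ⊙ z_{t−1} + (1 − σ_t) ⊙ m_t, where m_t and σ_t are produced by an autoregressive network so the Jacobian is triangular and log|det| = Σ log σ_t. The one-hidden-layer MADE-style masking, the helper names (made_masks, iaf_step), and all hyperparameters are illustrative assumptions, not the paper's exact architecture; in particular, the paper's flows also condition on an encoder context h, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def made_masks(d, h):
    """Degree-based masks (MADE-style, assumed here) so that output i
    depends only on inputs with index strictly less than i."""
    deg_in = np.arange(1, d + 1)                             # input degrees 1..d
    deg_h = rng.integers(1, d, size=h)                       # hidden degrees in [1, d-1]
    m1 = (deg_h[:, None] >= deg_in[None, :]).astype(float)   # input -> hidden mask
    m2 = (deg_in[:, None] > deg_h[None, :]).astype(float)    # hidden -> output mask
    return m1, m2

def init_iaf_step(d, h):
    """Random parameters for one flow step (illustrative initialization)."""
    m1, m2 = made_masks(d, h)
    return dict(
        W1=rng.normal(0, 0.1, (h, d)), b1=np.zeros(h),
        Wm=rng.normal(0, 0.1, (d, h)), bm=np.zeros(d),
        Ws=rng.normal(0, 0.1, (d, h)), bs=np.full(d, 2.0),   # bias gates toward 1
        m1=m1, m2=m2,
    )

def iaf_step(z, p):
    """One IAF transformation: z' = sigma * z + (1 - sigma) * m,
    with m and sigma autoregressive in z. Because sigma_i and m_i depend
    only on z_{<i}, the Jacobian is triangular with diagonal sigma, so
    log|det dz'/dz| = sum(log sigma)."""
    hid = np.tanh((p["W1"] * p["m1"]) @ z + p["b1"])
    m = (p["Wm"] * p["m2"]) @ hid + p["bm"]
    s = (p["Ws"] * p["m2"]) @ hid + p["bs"]
    sigma = 1.0 / (1.0 + np.exp(-s))                         # sigmoid gate (stable update)
    z_new = sigma * z + (1.0 - sigma) * m
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

# Chain a few steps starting from a diagonal-Gaussian sample, tracking
# log q(z) via the change-of-variables formula.
d, h, T = 8, 32, 4
z = rng.standard_normal(d)
log_q = -0.5 * np.sum(z**2 + np.log(2 * np.pi))              # base log-density
for p in [init_iaf_step(d, h) for _ in range(T)]:
    z, log_det = iaf_step(z, p)
    log_q -= log_det                                         # subtract log|det| per step
print(z, log_q)
```

Note how the forward (sampling) direction needs only a single pass through each autoregressive network, which is why IAF scales well to high-dimensional latent spaces; it is the inverse direction that would require a sequential, per-dimension computation.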

Cite

Text

Kingma et al. "Improved Variational Inference with Inverse Autoregressive Flow." Neural Information Processing Systems, 2016.

Markdown

[Kingma et al. "Improved Variational Inference with Inverse Autoregressive Flow." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/kingma2016neurips-improved/)

BibTeX

@inproceedings{kingma2016neurips-improved,
  title     = {{Improved Variational Inference with Inverse Autoregressive Flow}},
  author    = {Kingma, Diederik P. and Salimans, Tim and Jozefowicz, Rafal and Chen, Xi and Sutskever, Ilya and Welling, Max},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {4743--4751},
  url       = {https://mlanthology.org/neurips/2016/kingma2016neurips-improved/}
}