Glow: Generative Flow with Invertible 1x1 Convolutions

Abstract

Flow-based generative models are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood and qualitative sample quality. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient synthesis of large and subjectively realistic-looking images.
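The invertible 1x1 convolution named in the title can be illustrated with a minimal NumPy sketch (not the authors' implementation; shapes and initialization are illustrative assumptions). A 1x1 convolution with equal input and output channel counts is a shared C×C linear map applied at every spatial position; if the weight matrix is invertible, the operation is exactly invertible and its log-determinant contribution to the log-likelihood is cheap to compute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: one 4x4 "image" with 3 channels (assumption, not
# the paper's experimental setting).
H, W_dim, C = 4, 4, 3
x = rng.standard_normal((H, W_dim, C))

# Initialize the CxC weight as a random rotation (as Glow does), so that
# det(W) = +/-1 and the initial log-determinant is zero.
W, _ = np.linalg.qr(rng.standard_normal((C, C)))

# Forward pass: the same linear map at every pixel, i.e. a 1x1 convolution.
y = x @ W.T

# Exact log-determinant of the Jacobian: H * W * log|det(W)|.
logdet = H * W_dim * np.log(np.abs(np.linalg.det(W)))

# Inverse pass recovers the input exactly (up to floating-point error),
# which is what makes the flow's log-likelihood tractable.
x_rec = y @ np.linalg.inv(W).T
assert np.allclose(x, x_rec)
```

Because the map acts only across channels, both the forward and inverse passes parallelize over all spatial positions, consistent with the parallel synthesis claim in the abstract.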

Cite

Text

Kingma and Dhariwal. "Glow: Generative Flow with Invertible 1x1 Convolutions." Neural Information Processing Systems, 2018.

Markdown

[Kingma and Dhariwal. "Glow: Generative Flow with Invertible 1x1 Convolutions." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/kingma2018neurips-glow/)

BibTeX

@inproceedings{kingma2018neurips-glow,
  title     = {{Glow: Generative Flow with Invertible 1x1 Convolutions}},
  author    = {Kingma, Diederik P. and Dhariwal, Prafulla},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {10215--10224},
  url       = {https://mlanthology.org/neurips/2018/kingma2018neurips-glow/}
}