Variational Inference with Continuously-Indexed Normalizing Flows

Abstract

Continuously-indexed flows (CIFs) have recently achieved improvements over baseline normalizing flows on a variety of density estimation tasks. CIFs do not possess a closed-form marginal density, and so, unlike standard flows, cannot be plugged directly into a variational inference (VI) scheme to produce a more expressive family of approximate posteriors. However, we show here how CIFs can be used as part of an auxiliary VI scheme to formulate and train expressive posterior approximations in a natural way. We exploit the conditional independence structure of multi-layer CIFs to build the required auxiliary inference models, which we show empirically yield low-variance estimators of the model evidence. We then demonstrate the advantages of CIFs over baseline flows in VI problems when the posterior distribution of interest has a complicated topology, obtaining improved results in both the Bayesian inference and surrogate maximum likelihood settings.
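
For context, the auxiliary VI scheme mentioned in the abstract rests on the standard auxiliary-variable lower bound; the following is a minimal sketch of that bound, with notation (u for the CIF's continuous indices, q for the variational family, r for the auxiliary inference model) chosen here for illustration rather than quoted from the paper. Because the CIF marginal $q(z) = \int q(z, u)\, du$ is intractable, one instead maximizes

\log p(x) \;\geq\; \mathbb{E}_{q(z, u)}\bigl[\log p(x, z) + \log r(u \mid z) - \log q(z, u)\bigr],

where $r(u \mid z)$ plays the role of the auxiliary inference model built from the CIF's conditional independence structure. The gap relative to the usual ELBO is $\mathbb{E}_{q(z)}\,\mathrm{KL}\bigl(q(u \mid z)\,\|\,r(u \mid z)\bigr)$, so the bound tightens, and the evidence estimator's variance drops, as $r(u \mid z)$ approaches $q(u \mid z)$.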

Cite

Text

Caterini et al. "Variational Inference with Continuously-Indexed Normalizing Flows." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Caterini et al. "Variational Inference with Continuously-Indexed Normalizing Flows." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/caterini2021uai-variational/)

BibTeX

@inproceedings{caterini2021uai-variational,
  title     = {{Variational Inference with Continuously-Indexed Normalizing Flows}},
  author    = {Caterini, Anthony and Cornish, Rob and Sejdinovic, Dino and Doucet, Arnaud},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {44--53},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/caterini2021uai-variational/}
}