Doubly Semi-Implicit Variational Inference

Abstract

We extend the existing framework of semi-implicit variational inference (SIVI) and introduce doubly semi-implicit variational inference (DSIVI), a way to perform variational inference and learning when both the approximate posterior and the prior distribution are semi-implicit. In other words, DSIVI performs inference in models where the prior and the posterior can be expressed as an intractable infinite mixture of some analytic density with a highly flexible implicit mixing distribution. We provide a sandwich bound on the evidence lower bound (ELBO) objective that can be made arbitrarily tight. Unlike discriminator-based and kernel-based approaches to implicit variational inference, DSIVI optimizes a proper lower bound on ELBO that is asymptotically exact. We evaluate DSIVI on a set of problems that benefit from implicit priors. In particular, we show that DSIVI gives rise to a simple modification of VampPrior, the current state-of-the-art prior for variational autoencoders, which improves its performance.
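The abstract's central construction, a lower bound on the ELBO assembled from mixture estimates of both the semi-implicit posterior and the semi-implicit prior, can be sketched in a few lines. The NumPy sketch below is illustrative only: the Gaussian conditionals, the toy mixing maps, and every name in it (sample_psi, log_q_cond, log_q_upper, log_p_lower) are assumptions made for exposition, not code from the paper.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
D = 2          # dimensionality of the latent variable w
SIGMA = 0.5    # std of the analytic Gaussian conditional q(w | psi)

def sample_psi(n):
    """Implicit mixing distribution q(psi): noise pushed through a
    nonlinear map; we can sample it but cannot evaluate its density."""
    eps = rng.normal(size=(n, D))
    return np.tanh(eps) + 0.1 * eps ** 3  # toy "network", illustrative only

def log_q_cond(w, psi):
    """Analytic conditional: log N(w; psi, SIGMA^2 I)."""
    return (-0.5 * np.sum((w - psi) ** 2, axis=-1) / SIGMA ** 2
            - 0.5 * D * np.log(2 * np.pi * SIGMA ** 2))

def log_q_upper(w, psi0, K):
    """Surrogate for log q(w): a (K+1)-sample mixture estimate that reuses
    psi0, the sample that generated w. Its expectation upper-bounds
    E[log q(w)], so subtracting it lower-bounds the ELBO (SIVI-style)."""
    psis = np.vstack([psi0, sample_psi(K)])            # (K+1, D)
    return logsumexp(log_q_cond(w, psis), axis=0) - np.log(K + 1)

def log_p_lower(w, K, sample_zeta, log_p_cond):
    """Surrogate for a semi-implicit prior p(w) = E_zeta[p(w | zeta)]:
    by Jensen's inequality, E[log mean_k p(w | zeta_k)] <= log p(w),
    so adding it also lower-bounds the ELBO (the DSIVI ingredient)."""
    zetas = sample_zeta(K)                             # (K, D)
    return logsumexp(log_p_cond(w, zetas), axis=0) - np.log(K)

# Toy semi-implicit prior and a placeholder likelihood.
sample_zeta = lambda n: np.sin(rng.normal(size=(n, D)))
log_p_cond = log_q_cond                      # reuse the Gaussian conditional
log_lik = lambda w: -0.5 * np.sum(w ** 2)    # placeholder log p(x | w)

K, n_mc = 64, 1000
elbo_lb = 0.0
for _ in range(n_mc):
    psi0 = sample_psi(1)
    w = psi0[0] + SIGMA * rng.normal(size=D)  # w ~ q(w | psi0)
    elbo_lb += (log_lik(w)
                + log_p_lower(w, K, sample_zeta, log_p_cond)
                - log_q_upper(w, psi0, K)) / n_mc
print(f"DSIVI lower bound on the ELBO (K={K}): {elbo_lb:.3f}")
```

Increasing K tightens both surrogate terms, which is the sense in which the paper's sandwich bound on the ELBO can be made arbitrarily tight.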

Cite

Text

Molchanov et al. "Doubly Semi-Implicit Variational Inference." Artificial Intelligence and Statistics, 2019.

Markdown

[Molchanov et al. "Doubly Semi-Implicit Variational Inference." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/molchanov2019aistats-doubly/)

BibTeX

@inproceedings{molchanov2019aistats-doubly,
  title     = {{Doubly Semi-Implicit Variational Inference}},
  author    = {Molchanov, Dmitry and Kharitonov, Valery and Sobolev, Artem and Vetrov, Dmitry},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {2593--2602},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/molchanov2019aistats-doubly/}
}