Variational Bayesian Monte Carlo
Abstract
Many probabilistic models of interest in scientific computing and machine learning have expensive, black-box likelihoods that prevent the application of standard techniques for Bayesian inference, such as MCMC, which would require access to the gradient or a large number of likelihood evaluations. We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC). VBMC combines variational inference with Gaussian-process-based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective. Our method produces both a nonparametric approximation of the posterior distribution and an approximate lower bound of the model evidence, useful for model selection. We demonstrate VBMC both on several synthetic likelihoods and on a neuronal model with data from real neurons. Across all tested problems and dimensions (up to D = 10), VBMC performs consistently well in reconstructing the posterior and the model evidence with a limited budget of likelihood evaluations, unlike other methods that work only in very low dimensions. Our framework shows great promise as a novel tool for posterior and model inference with expensive, black-box likelihoods.
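To make the abstract's core mechanism concrete, here is a minimal Python sketch of the idea of replacing the expensive log-joint inside the variational objective (ELBO) with a cheap Gaussian-process surrogate fit to a handful of evaluations. This is *not* the paper's algorithm: VBMC proper uses Bayesian quadrature (closed-form integration of the GP against a mixture-of-Gaussians variational posterior) together with active sampling, whereas this sketch substitutes a plain Monte Carlo average over a single Gaussian q and scikit-learn's GP. All names (`toy_log_joint`, the sample sizes, the kernel choice) are illustrative assumptions.

```python
# Minimal sketch (not the VBMC implementation): estimate the ELBO of a
# variational posterior q using a GP surrogate of an expensive log-joint.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
D = 2  # dimensionality of the toy problem


def toy_log_joint(theta):
    """Stand-in for an expensive, black-box log p(data, theta)."""
    return multivariate_normal.logpdf(theta, mean=np.zeros(D))


# 1) Evaluate the expensive log-joint at a small budget of points.
X = rng.normal(size=(20, D))
y = np.array([toy_log_joint(x) for x in X])

# 2) Fit a GP surrogate to the observed log-joint values.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# 3) ELBO(q) = E_q[log p(data, theta)] + H[q]. For q = N(mu, sigma^2 I),
#    approximate the expectation by averaging the GP mean over samples from q.
#    (VBMC integrates the GP analytically via Bayesian quadrature instead.)
mu, sigma = np.zeros(D), 1.0
samples = mu + sigma * rng.normal(size=(2000, D))
expected_log_joint = gp.predict(samples).mean()
entropy = 0.5 * D * (1.0 + np.log(2 * np.pi)) + D * np.log(sigma)  # H[q], exact
elbo_estimate = expected_log_joint + entropy
print(f"surrogate ELBO estimate: {elbo_estimate:.3f}")
```

The point of the surrogate is that step 3 touches only the GP, so the variational parameters (here `mu`, `sigma`) can be optimized with many cheap objective evaluations while the true likelihood is queried only in step 1.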
Cite
Text
Acerbi. "Variational Bayesian Monte Carlo." Neural Information Processing Systems, 2018.Markdown
[Acerbi. "Variational Bayesian Monte Carlo." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/acerbi2018neurips-variational/)BibTeX
@inproceedings{acerbi2018neurips-variational,
  title = {{Variational Bayesian Monte Carlo}},
  author = {Acerbi, Luigi},
  booktitle = {Neural Information Processing Systems},
  year = {2018},
  pages = {8213--8223},
  url = {https://mlanthology.org/neurips/2018/acerbi2018neurips-variational/}
}