No-Regret Approximate Inference via Bayesian Optimisation

Abstract

We consider Bayesian inference problems where the likelihood function is either expensive to evaluate or only available via noisy estimates. This setting encompasses application scenarios involving, for example, large datasets or models whose likelihoods can only be evaluated through expensive simulations. We formulate the problem within a Bayesian optimisation framework over a space of probability distributions and derive an upper confidence bound (UCB) algorithm that proposes non-parametric candidate distributions. The algorithm is designed to minimise regret, defined in this case as the Kullback-Leibler divergence with respect to the true posterior. Equipped with a Gaussian process surrogate model, we show that the resulting UCB algorithm is asymptotically no-regret. The method can be easily implemented as a batch Bayesian optimisation algorithm whose point evaluations are selected via Markov chain Monte Carlo. Experimental results demonstrate the method's performance on inference problems.
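To make the mechanics concrete, the sketch below illustrates the plain GP-UCB loop that the paper's distribution-space algorithm builds on: a Gaussian process surrogate is fit to noisy, expensive log-likelihood evaluations, and the next query point maximises an upper confidence bound. This is a simplified point-space analogue under stated assumptions, not the paper's method; in particular, the paper optimises over probability distributions and selects evaluation points via MCMC, whereas here the candidates form a fixed grid, and `noisy_log_lik` is a hypothetical stand-in for an expensive simulator.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between 1-D input arrays A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=0.1):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def noisy_log_lik(theta, rng):
    """Hypothetical stand-in for an expensive, noisy log-likelihood."""
    return -0.5 * ((theta - 0.3) / 0.1) ** 2 + 0.1 * rng.standard_normal()

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)            # candidate parameter values
X = list(rng.uniform(0.0, 1.0, size=3))      # small initial design
y = [noisy_log_lik(t, rng) for t in X]

for t in range(20):
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    # A standard GP-UCB confidence width schedule (illustrative choice).
    beta = 2.0 * np.log((t + 1) ** 2 * np.pi**2 / 0.6)
    ucb = mu + np.sqrt(beta) * sd
    theta_next = grid[np.argmax(ucb)]        # query where the UCB is largest
    X.append(theta_next)
    y.append(noisy_log_lik(theta_next, rng))
```

The UCB rule trades off exploitation (high posterior mean) against exploration (high posterior uncertainty), which is the mechanism behind the no-regret guarantee; the paper lifts this idea from point queries to candidate distributions, with regret measured by KL divergence to the true posterior.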

Cite

Text

Oliveira et al. "No-Regret Approximate Inference via Bayesian Optimisation." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Oliveira et al. "No-Regret Approximate Inference via Bayesian Optimisation." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/oliveira2021uai-noregret/)

BibTeX

@inproceedings{oliveira2021uai-noregret,
  title     = {{No-Regret Approximate Inference via Bayesian Optimisation}},
  author    = {Oliveira, Rafael and Ott, Lionel and Ramos, Fabio},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {2082--2092},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/oliveira2021uai-noregret/}
}