Distributionally Robust Bayesian Quadrature Optimization

Abstract

Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Although the Monte Carlo estimate is unbiased, it has high variance when the sample set is small and can therefore yield a spurious objective function. We adopt a distributionally robust optimization perspective on this problem by maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of the proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.
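The two ideas in the abstract can be illustrated numerically. Below is a minimal sketch (not the paper's DRBQO algorithm): a toy integrand and a Gaussian base distribution, chosen here purely for illustration, show that the small-sample Monte Carlo estimate is unbiased but high-variance; the worst-case reweighting then uses a chi-square ball around the empirical distribution, one common choice of ambiguity set in distributionally robust optimization, which may differ from the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(w):
    # Toy stand-in for an expensive black-box integrand.
    return np.sin(3 * w) + w ** 2

# Monte Carlo estimates of E_{w ~ N(0,1)}[f(w)] from small i.i.d.
# sample sets are unbiased but high-variance: compare the spread of
# repeated 5-sample estimates against repeated 500-sample estimates.
small_estimates = np.array([f(rng.standard_normal(5)).mean() for _ in range(2000)])
large_estimates = np.array([f(rng.standard_normal(500)).mean() for _ in range(2000)])
print("std of 5-sample estimates:  ", small_estimates.std())
print("std of 500-sample estimates:", large_estimates.std())

def worst_case(vals, rho):
    """Most adversarial expectation of `vals` over distributions in a
    chi-square ball of radius `rho` around the uniform empirical weights."""
    n = len(vals)
    cons = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},                      # simplex
        {"type": "ineq", "fun": lambda q: rho - 0.5 * n * ((q - 1 / n) ** 2).sum()},
    ]
    res = minimize(lambda q: q @ vals, np.full(n, 1 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.fun

# The adversarial reweighting of a fixed 5-point sample set is never
# more optimistic than the plain Monte Carlo average.
vals = f(rng.standard_normal(5))
print("MC average:        ", vals.mean())
print("worst-case average:", worst_case(vals, 1.0))
```

Maximizing the worst-case average over the design variable (rather than the plain Monte Carlo average) is the distributionally robust objective the abstract refers to; the chi-square radius `rho` trades off robustness against pessimism.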

Cite

Text

Nguyen et al. "Distributionally Robust Bayesian Quadrature Optimization." Artificial Intelligence and Statistics, 2020.

Markdown

[Nguyen et al. "Distributionally Robust Bayesian Quadrature Optimization." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/nguyen2020aistats-distributionally/)

BibTeX

@inproceedings{nguyen2020aistats-distributionally,
  title     = {{Distributionally Robust Bayesian Quadrature Optimization}},
  author    = {Nguyen, Thanh and Gupta, Sunil and Ha, Huong and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {1921--1931},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/nguyen2020aistats-distributionally/}
}