A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments

Abstract

We introduce a fully stochastic gradient-based approach to Bayesian optimal experimental design (BOED). Our approach utilizes variational lower bounds on the expected information gain (EIG) of an experiment that can be simultaneously optimized with respect to both the variational and design parameters. This allows the design process to be carried out through a single unified stochastic gradient ascent procedure, in contrast to existing approaches, which typically construct a pointwise EIG estimator and then pass it to a separate optimizer. We provide a number of different variational objectives, including the novel adaptive contrastive estimation (ACE) bound. Finally, we show that our gradient-based approaches are able to provide effective design optimization in substantially higher-dimensional settings than existing approaches.
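To make the unified optimization concrete, here is a minimal PyTorch sketch under assumed details: a toy exponential-decay model (theta ~ N(1, 0.25^2), y = exp(-theta * d) + noise) and the simpler posterior (Barber-Agakov) lower bound rather than ACE. The model, hyperparameters, and variable names are illustrative choices, not taken from the paper; the point is only that the design d and the variational parameters phi are updated together in a single stochastic gradient loop.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy decay experiment (an illustrative assumption, not a model from the paper):
#   theta ~ N(1, 0.25^2),   y | theta, d ~ N(exp(-theta * d), sigma^2)
# We maximize the posterior (Barber-Agakov) lower bound
#   EIG(d) >= E_{p(theta) p(y|theta,d)}[log q_phi(theta | y)] + H[p(theta)]
# jointly over the design d and the variational parameters phi with one
# stochastic gradient ascent loop; H[p(theta)] is constant and is dropped.

sigma = 0.1
d_raw = torch.tensor(0.0, requires_grad=True)  # unconstrained design parameter
phi = torch.zeros(3, requires_grad=True)       # [weight, bias, log-std] of q_phi
opt = torch.optim.Adam([d_raw, phi], lr=0.02)

for step in range(3000):
    opt.zero_grad()
    d = F.softplus(d_raw)                                 # keep the design positive
    theta = 1.0 + 0.25 * torch.randn(256)                 # theta ~ p(theta)
    y = torch.exp(-theta * d) + sigma * torch.randn(256)  # reparameterized y ~ p(y|theta,d)
    # Gaussian variational posterior q_phi(theta | y) = N(w*y + b, exp(s)^2)
    mean, log_std = phi[0] * y + phi[1], phi[2]
    log_q = (-0.5 * ((theta - mean) / log_std.exp()) ** 2
             - log_std - 0.5 * math.log(2 * math.pi))
    loss = -log_q.mean()  # ascending the BA bound = descending its negation
    loss.backward()
    opt.step()

print(f"optimized design d = {F.softplus(d_raw).item():.3f}")
```

Because y is simulated via the reparameterization trick, gradients of the bound flow back through the simulated data to the design, so no separate pointwise EIG estimation step is needed; swapping in the ACE bound would additionally learn a proposal for the contrastive samples.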

Cite

Text

Foster et al. "A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments." Artificial Intelligence and Statistics, 2020.

Markdown

[Foster et al. "A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/foster2020aistats-unified/)

BibTeX

@inproceedings{foster2020aistats-unified,
  title     = {{A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments}},
  author    = {Foster, Adam and Jankowiak, Martin and O'Meara, Matthew and Teh, Yee Whye and Rainforth, Tom},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {2959--2969},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/foster2020aistats-unified/}
}