Why Non-Myopic Bayesian Optimization Is Promising and How Far Should We Look-Ahead? A Study via Rollout
Abstract
Lookahead Bayesian optimization (BO), also known as non-myopic BO, aims to find optimal sampling policies by solving a dynamic programming (DP) formulation that maximizes a long-term reward over a rolling horizon. Though promising, lookahead BO faces the risk of error propagation through its increased dependence on a possibly mis-specified model. In this work we focus on the rollout approximation for solving the otherwise intractable DP. We first prove the improving nature of rollout in tackling lookahead BO and provide a sufficient condition for the base heuristic to be rollout improving. We then provide both a theoretical and a practical guideline for deciding on the rolling horizon stagewise. This guideline is built on quantifying the negative effect of a mis-specified model. To illustrate our idea, we provide case studies on both single- and multi-information-source BO. Empirical results show the advantageous properties of our method over several myopic and non-myopic BO algorithms.
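To make the rollout idea concrete, here is a toy numpy sketch (not the authors' code): the rollout value of a candidate point is its immediate expected improvement (EI) plus the average future EI collected by running a base heuristic (greedy EI) for the remaining horizon on "fantasy" Gaussian-process models. The kernel, length-scale, fantasy count, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    # Standard GP posterior mean and std at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss - v.T @ v), 1e-12, None)
    return Ks.T @ alpha, np.sqrt(var)

def ei(mu, sd, best):
    # Expected improvement for minimization; always non-negative.
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sd * phi

def rollout_value(X, y, x0, horizon, grid, rng, n_fantasies=8):
    """Immediate EI at x0 plus the average EI gathered over horizon-1
    further steps of the base heuristic (greedy EI) on fantasy models."""
    mu, sd = gp_posterior(X, y, np.array([x0]))
    val = ei(mu, sd, y.min())[0]
    if horizon <= 1:
        return val  # horizon 1 recovers myopic EI
    future = 0.0
    for _ in range(n_fantasies):
        Xf, yf, xq = X.copy(), y.copy(), x0
        for _ in range(horizon - 1):
            m, s = gp_posterior(Xf, yf, np.array([xq]))
            yq = rng.normal(m[0], s[0])                 # fantasy observation
            Xf, yf = np.append(Xf, xq), np.append(yf, yq)
            m, s = gp_posterior(Xf, yf, grid)
            scores = ei(m, s, yf.min())
            future += scores.max()                      # reward of next step
            xq = grid[scores.argmax()]                  # base heuristic move
    return val + future / n_fantasies
```

Because every future EI term is non-negative, a longer horizon can only raise this estimated value; the paper's horizon-selection guideline weighs that gain against the error a mis-specified model injects into the fantasy rollouts.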
Cite
Text
Yue and AL Kontar. "Why Non-Myopic Bayesian Optimization Is Promising and How Far Should We Look-Ahead? A Study via Rollout." Artificial Intelligence and Statistics, 2020.
Markdown
[Yue and AL Kontar. "Why Non-Myopic Bayesian Optimization Is Promising and How Far Should We Look-Ahead? A Study via Rollout." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/yue2020aistats-nonmyopic/)
BibTeX
@inproceedings{yue2020aistats-nonmyopic,
title = {{Why Non-Myopic Bayesian Optimization Is Promising and How Far Should We Look-Ahead? A Study via Rollout}},
author = {Yue, Xubo and AL Kontar, Raed},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {2808-2818},
volume = {108},
url = {https://mlanthology.org/aistats/2020/yue2020aistats-nonmyopic/}
}