Tight Regret Bounds for Bayesian Optimization in One Dimension

Abstract

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise. We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time $T$ behaves as $\Omega(\sqrt{T})$ and $O(\sqrt{T\log T})$. This gives a tight characterization up to a $\sqrt{\log T}$ factor, and includes the first non-trivial lower bound for noisy BO. Our assumptions are satisfied, for example, by the squared exponential and Matérn-$\nu$ kernels, with the latter requiring $\nu > 2$. Our results certify the near-optimality of existing bounds (Srinivas et al., 2009) for the SE kernel, while proving them to be strictly suboptimal for the Matérn kernel with $\nu > 2$.
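For reference, and assuming the standard noisy BO setup used in this line of work (domain taken to be $[0,1]$ for concreteness; the paper's exact notation may differ), the cumulative regret in the bounds above is measured against the global maximizer of the unknown function $f$ drawn from the GP prior, with observations corrupted by Gaussian noise:

$$
x^\star \;=\; \operatorname*{arg\,max}_{x \in [0,1]} f(x),
\qquad
R_T \;=\; \sum_{t=1}^{T} \bigl( f(x^\star) - f(x_t) \bigr),
\qquad
y_t \;=\; f(x_t) + z_t, \quad z_t \sim \mathcal{N}(0, \sigma^2).
$$

In this notation, the results state that no algorithm can achieve expected cumulative regret better than $\Omega(\sqrt{T})$, while some algorithm attains $O(\sqrt{T\log T})$.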

Cite

Text

Scarlett. "Tight Regret Bounds for Bayesian Optimization in One Dimension." International Conference on Machine Learning, 2018.

Markdown

[Scarlett. "Tight Regret Bounds for Bayesian Optimization in One Dimension." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/scarlett2018icml-tight/)

BibTeX

@inproceedings{scarlett2018icml-tight,
  title     = {{Tight Regret Bounds for Bayesian Optimization in One Dimension}},
  author    = {Scarlett, Jonathan},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {4500--4508},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/scarlett2018icml-tight/}
}