Bayesian Optimization with Exponential Convergence

Abstract

This paper presents a Bayesian optimization method with exponential convergence that requires neither auxiliary optimization nor delta-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Moreover, the only existing Bayesian optimization method with exponential convergence requires delta-cover sampling, which is generally considered impractical. Our approach eliminates both requirements while achieving an exponential convergence rate.
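To illustrate the auxiliary optimization the abstract refers to, below is a minimal sketch of *standard* Bayesian optimization (not the paper's method): a GP surrogate with an RBF kernel and a UCB acquisition function. The kernel, length scale, and the random-search acquisition maximizer are illustrative assumptions; the point is that each iteration contains an inner non-convex global optimization over the acquisition function, which is the step this paper eliminates.

```python
# Hedged sketch of standard Bayesian optimization (NOT the paper's method).
# The call to `maximize_acquisition` is the auxiliary non-convex global
# optimization the paper removes; here it is crudely approximated by
# random search over the 1-D domain [0, 1].
import numpy as np

def gp_posterior(X, y, Xq, length_scale=0.2, noise=1e-6):
    """GP posterior mean/variance at query points Xq (RBF kernel, illustrative)."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)
    K = k(X, X) + noise * np.eye(len(X))      # train covariance + jitter
    Ks = k(X, Xq)                             # train/query cross-covariance
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = 1.0 - np.sum(Ks * sol, axis=0)      # prior variance is 1 for RBF
    return mu, np.maximum(var, 0.0)

def maximize_acquisition(X, y, rng, beta=2.0, n_candidates=500):
    """Auxiliary optimization step: maximize UCB, approximated by random search."""
    Xq = rng.uniform(0.0, 1.0, n_candidates)
    mu, var = gp_posterior(X, y, Xq)
    ucb = mu + beta * np.sqrt(var)            # upper confidence bound
    return Xq[np.argmax(ucb)]

def bayes_opt(f, n_iter=15, seed=0):
    """Maximize f over [0, 1] with GP-UCB Bayesian optimization."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, 3)              # small initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        x_next = maximize_acquisition(X, y, rng)  # inner global optimization
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Example: maximize a smooth 1-D function whose optimum is at x = 0.7.
x_best, y_best = bayes_opt(lambda x: -(x - 0.7) ** 2)
```

In practice the inner random search is replaced by a multi-start gradient method or DIRECT, but either way it is itself a global optimization problem; the paper's contribution is a method whose exponential convergence holds without solving it (and without delta-cover sampling).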

Cite

Text

Kawaguchi et al. "Bayesian Optimization with Exponential Convergence." Neural Information Processing Systems, 2015.

Markdown

[Kawaguchi et al. "Bayesian Optimization with Exponential Convergence." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/kawaguchi2015neurips-bayesian/)

BibTeX

@inproceedings{kawaguchi2015neurips-bayesian,
  title     = {{Bayesian Optimization with Exponential Convergence}},
  author    = {Kawaguchi, Kenji and Kaelbling, Leslie Pack and Lozano-Pérez, Tomás},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {2809--2817},
  url       = {https://mlanthology.org/neurips/2015/kawaguchi2015neurips-bayesian/}
}