Scalable Global Optimization via Local Bayesian Optimization

Abstract

Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
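To make the algorithmic recipe in the abstract concrete, the sketch below illustrates its two key ingredients: independent trust regions that each maintain a local surrogate model and grow or shrink with their recent progress, and a bandit-style allocation that routes each evaluation to the region whose posterior (Thompson) sample looks most promising. This is a minimal illustration under assumptions of our own, not the authors' reference implementation: it minimizes over the unit hypercube, substitutes scikit-learn Gaussian processes for the paper's surrogates, and simplifies the success/failure resizing rules; the names TrustRegion and turbo_minimize are hypothetical.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


class TrustRegion:
    """One local model: a GP plus a hyperrectangle around its incumbent."""

    def __init__(self, dim, rng, n_init=5):
        self.rng = rng
        self.length = 0.8                 # initial side length of the box
        self.succ = self.fail = 0
        self.X = rng.uniform(size=(n_init, dim))
        self.y = None                     # filled in by the caller

    def fit(self):
        self.gp = GaussianProcessRegressor(
            kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True
        ).fit(self.X, self.y)

    def candidates(self, n=100):
        # Uniform candidates inside the box centered at the incumbent.
        center = self.X[np.argmin(self.y)]
        lb = np.clip(center - self.length / 2, 0.0, 1.0)
        ub = np.clip(center + self.length / 2, 0.0, 1.0)
        return self.rng.uniform(lb, ub, size=(n, self.X.shape[1]))

    def update(self, x, y):
        # Simplified resizing: grow after 3 straight improvements,
        # halve after 3 straight failures.
        if y < self.y.min():
            self.succ, self.fail = self.succ + 1, 0
        else:
            self.succ, self.fail = 0, self.fail + 1
        if self.succ >= 3:
            self.length, self.succ = min(2.0 * self.length, 1.6), 0
        elif self.fail >= 3:
            self.length, self.fail = self.length / 2.0, 0
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)


def turbo_minimize(f, dim=2, n_regions=3, budget=60, seed=0):
    rng = np.random.default_rng(seed)

    def fresh():
        tr = TrustRegion(dim, rng)
        tr.y = np.array([f(x) for x in tr.X])
        return tr

    regions = [fresh() for _ in range(n_regions)]
    for _ in range(budget):
        best = None
        for j, tr in enumerate(regions):
            if tr.length < 0.01:          # restart regions that collapsed
                tr = regions[j] = fresh()
            tr.fit()
            cand = tr.candidates()
            # One posterior draw per region; the lowest draw overall decides
            # which region receives this evaluation (the bandit step).
            draw = tr.gp.sample_y(
                cand, random_state=int(rng.integers(1 << 31))
            ).ravel()
            i = int(np.argmin(draw))
            if best is None or draw[i] < best[0]:
                best = (draw[i], tr, cand[i])
        _, tr, x = best
        tr.update(x, f(x))
    winner = min(regions, key=lambda tr: tr.y.min())
    return winner.y.min(), winner.X[np.argmin(winner.y)]


if __name__ == "__main__":
    sphere = lambda x: float(np.sum((x - 0.7) ** 2))
    fbest, xbest = turbo_minimize(sphere)
    print(f"best value {fbest:.4f} at {xbest}")

The Thompson draws are what make the allocation an implicit bandit: a region wins an evaluation exactly when its posterior sample beats those of the other regions, so promising regions collect more samples while stagnating ones shrink and are eventually restarted.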

Cite

Text

Eriksson et al. "Scalable Global Optimization via Local Bayesian Optimization." Neural Information Processing Systems, 2019.

Markdown

[Eriksson et al. "Scalable Global Optimization via Local Bayesian Optimization." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/eriksson2019neurips-scalable/)

BibTeX

@inproceedings{eriksson2019neurips-scalable,
  title     = {{Scalable Global Optimization via Local Bayesian Optimization}},
  author    = {Eriksson, David and Pearce, Michael and Gardner, Jacob and Turner, Ryan D. and Poloczek, Matthias},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {5496--5507},
  url       = {https://mlanthology.org/neurips/2019/eriksson2019neurips-scalable/}
}