Optimistic Optimization of Gaussian Process Samples

Abstract

Bayesian optimization is a popular formalism for global optimization, but its computational costs limit it to expensive-to-evaluate functions. A competing, computationally more efficient, global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in the form of a dissimilarity function. We investigate to which degree the conceptual advantages of Bayesian optimization can be combined with the computational efficiency of optimistic optimization. By mapping the kernel to a dissimilarity, we obtain an optimistic optimization algorithm for the Bayesian optimization setting with a run-time of up to $O(N \log N)$. As a high-level takeaway we find that, when using stationary kernels on objectives of low evaluation cost, optimistic optimization can be preferable over Bayesian optimization, while for strongly coupled and parametric models, Bayesian optimization can perform much better, even at low evaluation cost. As a conceptual takeaway, our results demonstrate that balancing exploration and exploitation under Gaussian process assumptions does not require computing a posterior.
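The abstract mentions mapping a kernel to a dissimilarity. As a minimal sketch of the general idea (not the paper's specific construction), a positive-definite kernel $k$ induces the canonical RKHS distance $d(x, y) = \sqrt{k(x,x) - 2k(x,y) + k(y,y)}$, which can serve as the dissimilarity required by optimistic optimization. The function names below (rbf_kernel, kernel_dissimilarity) are illustrative assumptions, not the authors' API.

import numpy as np

def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
    # Stationary squared-exponential (RBF) kernel.
    sq_dist = np.sum((np.asarray(x) - np.asarray(y)) ** 2)
    return variance * np.exp(-0.5 * sq_dist / lengthscale ** 2)

def kernel_dissimilarity(x, y, kernel=rbf_kernel):
    # Canonical kernel-induced (semi-)metric:
    # d(x, y) = sqrt(k(x,x) - 2 k(x,y) + k(y,y)),
    # i.e. the RKHS distance between the feature maps of x and y.
    return np.sqrt(max(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y), 0.0))

# Nearby points yield a small dissimilarity, distant points a larger one.
print(kernel_dissimilarity(np.array([0.1]), np.array([0.15])))
print(kernel_dissimilarity(np.array([0.1]), np.array([0.9])))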

Cite

Text

Grosse et al. "Optimistic Optimization of Gaussian Process Samples." Transactions on Machine Learning Research, 2023.

Markdown

[Grosse et al. "Optimistic Optimization of Gaussian Process Samples." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/grosse2023tmlr-optimistic/)

BibTeX

@article{grosse2023tmlr-optimistic,
  title     = {{Optimistic Optimization of Gaussian Process Samples}},
  author    = {Grosse, Julia and Zhang, Cheng and Hennig, Philipp},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/grosse2023tmlr-optimistic/}
}