Understanding High-Dimensional Bayesian Optimization

Abstract

Recent work reported that simple Bayesian optimization (BO) methods perform well on high-dimensional real-world tasks, seemingly contradicting prior work and tribal knowledge. This paper investigates why. We identify the underlying challenges that arise in high-dimensional BO and explain why recent methods succeed. Our empirical analysis shows that vanishing gradients caused by Gaussian process (GP) initialization schemes play a major role in the failures of high-dimensional Bayesian optimization (HDBO) and that methods promoting local search behaviors are better suited for the task. We find that maximum likelihood estimation (MLE) of GP length scales suffices for state-of-the-art performance. Building on these findings, we propose MSR, a simple variant of MLE, and show that it achieves state-of-the-art performance on a comprehensive set of real-world applications. We present targeted experiments to illustrate and confirm our findings.
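To make the MLE claim concrete: fitting a GP with one length scale per dimension (ARD) by maximizing the log marginal likelihood is exactly MLE of the length scales. The following is a minimal sketch using scikit-learn on a hypothetical 10-D toy objective; it is not the paper's implementation, and the bounds and data are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 10-D toy objective (the paper's benchmarks are real-world tasks).
rng = np.random.default_rng(0)
d = 10
X = rng.uniform(size=(50, d))
y = np.sin(X.sum(axis=1))

# One length scale per dimension (ARD). Calling fit() maximizes the log
# marginal likelihood over the kernel hyperparameters, i.e., MLE of the
# per-dimension GP length scales.
kernel = RBF(length_scale=np.ones(d), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

print(gp.kernel_.length_scale)  # fitted per-dimension length scales
```

A high fitted length scale in a dimension indicates the GP treats the objective as nearly flat along it, which connects to the vanishing-gradient behavior the abstract describes.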

Cite

Text

Papenmeier et al. "Understanding High-Dimensional Bayesian Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Papenmeier et al. "Understanding High-Dimensional Bayesian Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/papenmeier2025icml-understanding/)

BibTeX

@inproceedings{papenmeier2025icml-understanding,
  title     = {{Understanding High-Dimensional Bayesian Optimization}},
  author    = {Papenmeier, Leonard and Poloczek, Matthias and Nardi, Luigi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {47902--47923},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/papenmeier2025icml-understanding/}
}