Long-Run Behaviour of Multi-Fidelity Bayesian Optimisation

Abstract

Multi-fidelity Bayesian Optimisation (MFBO) has been shown to generally converge faster than single-fidelity Bayesian Optimisation (SFBO) (\cite{poloczek2017multi}). Inspired by recent benchmark papers, we investigate the long-run behaviour of MFBO, motivated by observations in the literature that it can under-perform in certain scenarios (\cite{mikkola2023multi}, \cite{eggensperger2021hpobench}). Under-performance of MFBO in the long run could significantly undermine its application to many research tasks, especially when we cannot identify when the under-performance begins and another BO algorithm would have performed better. We set up a simple benchmark study, present empirical results and discuss possible scenarios; our findings remain inconclusive.
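
To make the kind of long-run comparison described in the abstract concrete, the sketch below runs a toy cost-aware comparison of single-fidelity BO against a naive two-stage multi-fidelity heuristic. It is a minimal illustration only: the Forrester objective, the biased low-fidelity surrogate, the 30%-of-budget fidelity schedule, the cost model and the grid-based expected-improvement acquisition are all assumptions made here for exposition, not the paper's benchmark setup.

```python
# Toy sketch (not the paper's method): compare best-found high-fidelity value
# per unit cost for single-fidelity BO vs. a naive two-stage multi-fidelity run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X_grid = np.linspace(0, 1, 200).reshape(-1, 1)

def f_high(x):  # high-fidelity objective (Forrester function), cost 1.0
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_low(x):   # cheap, biased low-fidelity approximation, cost 0.1 (assumed)
    return 0.5 * f_high(x) + 10 * (x - 0.5) - 5

def expected_improvement(gp, X, y_best):
    # EI for minimisation, evaluated on a fixed grid for simplicity.
    mu, std = gp.predict(X, return_std=True)
    std = np.maximum(std, 1e-9)
    z = (y_best - mu) / std
    return (y_best - mu) * norm.cdf(z) + std * norm.pdf(z)

def run_bo(use_low_fidelity, budget=20.0):
    """Return a (cumulative cost, best high-fidelity value) trace."""
    X, y, cost, trace = [], [], 0.0, []
    while cost < budget:
        # Naive multi-fidelity heuristic: spend the first 30% of the budget
        # on cheap low-fidelity queries, then switch to high fidelity.
        low_phase = use_low_fidelity and cost < 0.3 * budget
        f, c = (f_low, 0.1) if low_phase else (f_high, 1.0)
        if len(X) < 3:                       # small random initial design
            x_next = rng.uniform(0, 1)
        else:
            # Deliberate simplification: one GP over mixed-fidelity data.
            gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True)
            gp.fit(np.array(X).reshape(-1, 1), np.array(y))
            ei = expected_improvement(gp, X_grid, min(y))
            x_next = float(X_grid[np.argmax(ei)])
        X.append(x_next)
        y.append(float(f(np.array([x_next]))[0]))
        cost += c
        best_high = min(float(f_high(np.array([x]))[0]) for x in X)
        trace.append((cost, best_high))
    return trace

print("SFBO (cost, best):", run_bo(use_low_fidelity=False)[-1])
print("MFBO (cost, best):", run_bo(use_low_fidelity=True)[-1])
```

Plotting the full traces over a long cost horizon, rather than only the final values, is what surfaces the late-stage behaviour the paper is concerned with.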

Cite

Text

Dovonon and Zeitler. "Long-Run Behaviour of Multi-Fidelity Bayesian Optimisation." NeurIPS 2023 Workshops: ReALML, 2023.

Markdown

[Dovonon and Zeitler. "Long-Run Behaviour of Multi-Fidelity Bayesian Optimisation." NeurIPS 2023 Workshops: ReALML, 2023.](https://mlanthology.org/neuripsw/2023/dovonon2023neuripsw-longrun/)

BibTeX

@inproceedings{dovonon2023neuripsw-longrun,
  title     = {{Long-Run Behaviour of Multi-Fidelity Bayesian Optimisation}},
  author    = {Dovonon, Gbetondji Jean-Sebastien and Zeitler, Jakob},
  booktitle = {NeurIPS 2023 Workshops: ReALML},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/dovonon2023neuripsw-longrun/}
}