CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization

Abstract

Existing multi-fidelity Bayesian optimization (MFBO) methods ignore the convergence behavior of the multi-fidelity surrogate as the fidelity increases, leading to inefficient exploration and suboptimal performance. We introduce CAMO (Convergence-Aware Multi-fidelity Optimization), a principled framework based on Linear Fidelity Differential Equations (LFiDEs) that explicitly encodes convergence of fidelity-indexed outputs and employs a closed-form nonstationary kernel. We rigorously prove existence of solutions and their pointwise/uniform convergence to the high-fidelity surrogate under mild restrictions, and provide new convergence results for general FiDEs using smooth, non-smooth, and even non-convex Lyapunov functions, establishing a bridge between MFBO and the theory of subgradient flows in non-smooth optimization. Combined with a fidelity-aware acquisition function, CAMO outperforms state-of-the-art MFBO methods on a majority of synthetic and real-world benchmarks, with up to a four-fold improvement in optimization performance and a dramatic speed-up in convergence. CAMO offers a tractable and theoretically grounded approach to convergence-aware MFBO.
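For intuition only, here is a minimal sketch of what a convergent fidelity differential equation can look like; the notation (f_t, f_\infty, \lambda) is ours and not taken from the paper. The simplest linear fidelity ODE drives the fidelity-t output toward a high-fidelity limit:

  \partial_t f_t(x) = -\lambda(x)\,\bigl(f_t(x) - f_\infty(x)\bigr), \qquad \lambda(x) > 0,

with closed-form solution

  f_t(x) = f_\infty(x) + e^{-\lambda(x)\, t}\,\bigl(f_0(x) - f_\infty(x)\bigr),

so f_t(x) \to f_\infty(x) exponentially as the fidelity index t \to \infty; equivalently, the Lyapunov function V(t) = \bigl(f_t(x) - f_\infty(x)\bigr)^2 decays along the flow. This toy dynamic illustrates the kind of convergence behavior that an LFiDE-based surrogate prior is designed to encode.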

Cite

Text

Xing et al. "CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xing et al. "CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xing2025neurips-camo/)

BibTeX

@inproceedings{xing2025neurips-camo,
  title     = {{CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization}},
  author    = {Xing, Wei W. and Lu, Zhenjie and Shah, Akeel},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xing2025neurips-camo/}
}