Adaptive Multi-Fidelity Optimization with Fast Learning Rates

Abstract

In multi-fidelity optimization, the learner has access to biased approximations of the target function at varying costs. This paper studies the problem of optimizing a locally smooth function with a limited budget, where the learner must trade off the cost of these approximations against their bias. We first prove lower bounds on the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm, which matches these rates up to logarithmic factors without any knowledge of the function's smoothness or of the fidelity assumptions, and improves on previously proven guarantees. Finally, we show empirically that our algorithm outperforms previous multi-fidelity optimization methods without knowledge of problem-dependent parameters.
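The setting the abstract describes can be made concrete with a minimal sketch. The names below (`cost`, `bias_bound`, the fidelity parameter `z`, and the two-stage budget split) are illustrative assumptions for a toy query model; this is not the paper's Kometo algorithm or its notation, only an example of the cost/bias tradeoff a budgeted learner faces.

```python
import numpy as np

# Toy multi-fidelity oracle: querying at fidelity z in (0, 1] costs cost(z)
# and returns an approximation of f whose bias is bounded by bias_bound(z).
# Higher fidelity means higher cost but smaller bias (a cost-to-bias tradeoff).
def f(x):
    return 1.0 - (x - 0.3) ** 2  # unknown target to maximize

def cost(z):
    return 1.0 / z               # cheap fidelities are inexpensive to query

def bias_bound(z):
    return 1.0 - z               # ...but less accurate

rng = np.random.default_rng(0)

def query(x, z):
    # Biased evaluation: |query(x, z) - f(x)| <= bias_bound(z).
    return f(x) + bias_bound(z) * rng.uniform(-1, 1)

# Naive budgeted strategy (illustration only): spend half the budget on cheap
# low-fidelity queries to screen candidates, then refine the best candidates
# with expensive high-fidelity queries.
budget, spent, z_lo, z_hi = 100.0, 0.0, 0.2, 0.9
scores = []
for x in np.linspace(0.0, 1.0, 50):
    if spent + cost(z_lo) > budget / 2:
        break
    scores.append((query(x, z_lo), x))
    spent += cost(z_lo)

refined = []
for _, x in sorted(scores, reverse=True)[:5]:
    if spent + cost(z_hi) > budget:
        break
    refined.append((query(x, z_hi), x))
    spent += cost(z_hi)

print("recommended x:", max(refined)[1])
```

The two-stage split above is a fixed heuristic; the point of the paper is precisely that a good learner should adapt this allocation to the (unknown) smoothness of `f` and the cost-to-bias behavior of the fidelities, rather than fixing it in advance.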

Cite

Text

Fiegel et al. "Adaptive Multi-Fidelity Optimization with Fast Learning Rates." Artificial Intelligence and Statistics, 2020.

Markdown

[Fiegel et al. "Adaptive Multi-Fidelity Optimization with Fast Learning Rates." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/fiegel2020aistats-adaptive/)

BibTeX

@inproceedings{fiegel2020aistats-adaptive,
  title     = {{Adaptive Multi-Fidelity Optimization with Fast Learning Rates}},
  author    = {Fiegel, Côme and Gabillon, Victor and Valko, Michal},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {3493--3502},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/fiegel2020aistats-adaptive/}
}