Gaussian Process Bandit Optimisation with Multi-Fidelity Evaluations

Abstract

In many scientific and engineering applications, we are tasked with optimising an expensive-to-evaluate black-box function $f$. Traditional methods for this problem assume access only to this single function. In many cases, however, cheap approximations to $f$ may be available. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to cheaply eliminate regions of low function value, reserve the expensive evaluations of $f$ for a small but promising region, and speedily identify the optimum. We formalise this task as a *multi-fidelity* bandit problem in which the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper-confidence-bound techniques. Our theoretical analysis demonstrates that it exhibits precisely the above behaviour and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.
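To make the idea concrete, below is a minimal, hypothetical sketch of the two-fidelity workflow the abstract describes: a GP with an upper-confidence-bound (UCB) rule first screens the domain using a cheap approximation, then spends expensive evaluations only on the surviving region. This is not the authors' MF-GP-UCB algorithm — the kernel, `beta`, the elimination margin `gap`, and the two-phase structure are illustrative simplifications.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential (RBF) kernel between 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # standard GP posterior mean and variance at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ki->i', Ks.T, Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

def two_fidelity_ucb(f_cheap, f_expensive, grid,
                     n_cheap=30, n_exp=10, beta=2.0, gap=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Phase 1: query the cheap fidelity with a UCB rule over the whole grid
    X = rng.uniform(grid.min(), grid.max(), 3)
    y = np.array([f_cheap(x) for x in X])
    for _ in range(n_cheap):
        mu, var = gp_posterior(X, y, grid)
        x = grid[np.argmax(mu + beta * np.sqrt(var))]
        X = np.append(X, x)
        y = np.append(y, f_cheap(x))
    # keep only points whose cheap-fidelity UCB is within `gap` of the best
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + beta * np.sqrt(var)
    promising = grid[ucb >= ucb.max() - gap]
    # Phase 2: spend expensive evaluations only on the promising region
    Xe = np.array([promising[len(promising) // 2]])
    ye = np.array([f_expensive(Xe[0])])
    for _ in range(n_exp):
        mu, var = gp_posterior(Xe, ye, promising)
        x = promising[np.argmax(mu + beta * np.sqrt(var))]
        Xe = np.append(Xe, x)
        ye = np.append(ye, f_expensive(x))
    return Xe[np.argmax(ye)]

if __name__ == "__main__":
    # toy example: expensive target peaks at x = 0.6; the cheap
    # fidelity is the same function with a small systematic bias
    f_exp = lambda x: -(x - 0.6) ** 2
    f_chp = lambda x: f_exp(x) - 0.02 * np.sin(5 * x)
    grid = np.linspace(0.0, 1.0, 200)
    best = two_fidelity_ucb(f_chp, f_exp, grid)
    print(best)
```

The design point the abstract emphasises is visible in phase 1: the cheap fidelity shrinks the search set before any expensive call is made, so the budget for $f$ is concentrated near the optimum rather than spread across the whole domain.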

Cite

Text

Kandasamy et al. "Gaussian Process Bandit Optimisation with Multi-Fidelity Evaluations." Neural Information Processing Systems, 2016.

Markdown

[Kandasamy et al. "Gaussian Process Bandit Optimisation with Multi-Fidelity Evaluations." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/kandasamy2016neurips-gaussian/)

BibTeX

@inproceedings{kandasamy2016neurips-gaussian,
  title     = {{Gaussian Process Bandit Optimisation with Multi-Fidelity Evaluations}},
  author    = {Kandasamy, Kirthevasan and Dasarathy, Gautam and Oliva, Junier B and Schneider, Jeff and Poczos, Barnabas},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {992--1000},
  url       = {https://mlanthology.org/neurips/2016/kandasamy2016neurips-gaussian/}
}