Optimistic Optimization of a Brownian

Abstract

We address the problem of optimizing a Brownian motion. We consider a (random) realization $W$ of a Brownian motion with input space in $[0,1]$. Given $W$, our goal is to return an $\epsilon$-approximation of its maximum using the smallest possible number of function evaluations, the sample complexity of the algorithm. We provide an algorithm with sample complexity of order $\log^2(1/\epsilon)$. This improves over previous results of Al-Mharmah and Calvin (1996) and Calvin et al. (2017) which provided only polynomial rates. Our algorithm is adaptive---each query depends on previous values---and is an instance of the optimism-in-the-face-of-uncertainty principle.
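The optimism-in-the-face-of-uncertainty idea behind the algorithm can be illustrated with a minimal sketch: sample the Brownian path lazily via Brownian-bridge conditioning, maintain an optimistic upper bound on the maximum over each unexplored interval, and always refine the interval with the largest bound. The bound constant, the midpoint-splitting rule, and the stopping test below are illustrative assumptions, not the authors' exact algorithm.

```python
import math
import random


def optimistic_brownian_max(eps=1e-2, seed=0):
    """Sketch of optimistic maximization of a Brownian sample path on [0, 1].

    The path is simulated lazily: each new query at the midpoint of an
    interval is drawn from the Brownian-bridge conditional distribution
    given the two endpoint values, so only queried points are ever sampled.
    Returns (estimated maximum, number of function evaluations).
    """
    rng = random.Random(seed)
    # Known values: W(0) = 0 by definition; W(1) ~ N(0, 1).
    pts = [(0.0, 0.0), (1.0, rng.gauss(0.0, 1.0))]
    n_queries = 1  # the draw of W(1)

    def ucb(w1, w2, dt):
        # Optimistic bound on the bridge maximum over an interval of
        # length dt; the sqrt(2 dt log(1/eps)) form and constant are
        # assumptions made for this sketch.
        return max(w1, w2) + math.sqrt(2.0 * dt * math.log(1.0 / eps))

    while True:
        best_obs = max(w for _, w in pts)
        # Select the interval with the largest optimistic bound.
        i, bound = max(
            ((j, ucb(pts[j][1], pts[j + 1][1], pts[j + 1][0] - pts[j][0]))
             for j in range(len(pts) - 1)),
            key=lambda p: p[1])
        if bound - best_obs <= eps:
            # No interval can still hide a value more than eps above
            # the best observation: stop and return the estimate.
            return best_obs, n_queries
        (t1, w1), (t2, w2) = pts[i], pts[i + 1]
        tm = 0.5 * (t1 + t2)
        # Brownian-bridge sample at the midpoint:
        # mean (w1 + w2)/2, variance (t2 - t1)/4.
        wm = rng.gauss(0.5 * (w1 + w2), math.sqrt((t2 - t1) / 4.0))
        pts.insert(i + 1, (tm, wm))
        n_queries += 1
```

Because the bound shrinks with interval width, only intervals whose endpoints are near the running maximum keep getting refined, which is the adaptivity the abstract refers to; intervals far below the maximum are quickly ruled out.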

Cite

Text

Grill et al. "Optimistic Optimization of a Brownian." Neural Information Processing Systems, 2018.

Markdown

[Grill et al. "Optimistic Optimization of a Brownian." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/grill2018neurips-optimistic/)

BibTeX

@inproceedings{grill2018neurips-optimistic,
  title     = {{Optimistic Optimization of a Brownian}},
  author    = {Grill, Jean-Bastien and Valko, Michal and Munos, R{\'e}mi},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3005--3014},
  url       = {https://mlanthology.org/neurips/2018/grill2018neurips-optimistic/}
}