Multi-Scale Exploration of Convex Functions and Bandit Convex Optimization

Abstract

We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is $\tilde{O}(\mathrm{poly}(n) \sqrt{T})$, where $n$ is the dimension and $T$ the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to solve the Bayesian problem.
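As context for the proof strategy mentioned above, the information-ratio analysis of Russo and Van Roy bounds the Bayesian regret of an algorithm whose per-round information ratio is at most $\bar{\Gamma}$; a standard (sketched) form of the bound, with $A^*$ denoting the optimal action and $H$ the entropy, is

$$
\mathbb{E}\big[\mathrm{Regret}(T)\big] \;\le\; \sqrt{\bar{\Gamma}\, H(A^*)\, T},
$$

so controlling the information ratio by $\mathrm{poly}(n)$ via the multi-scale exploration distribution yields a Bayesian regret of order $\mathrm{poly}(n)\sqrt{T}$, which transfers to the adversarial minimax regret by duality.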

Cite

Text

Bubeck and Eldan. "Multi-Scale Exploration of Convex Functions and Bandit Convex Optimization." Annual Conference on Computational Learning Theory, 2016.

Markdown

[Bubeck and Eldan. "Multi-Scale Exploration of Convex Functions and Bandit Convex Optimization." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/bubeck2016colt-multi/)

BibTeX

@inproceedings{bubeck2016colt-multi,
  title     = {{Multi-Scale Exploration of Convex Functions and Bandit Convex Optimization}},
  author    = {Bubeck, Sébastien and Eldan, Ronen},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2016},
  pages     = {583--589},
  url       = {https://mlanthology.org/colt/2016/bubeck2016colt-multi/}
}