Bandit Convex Optimization: \(\sqrt{T}\) Regret in One Dimension

Abstract

We analyze the minimax regret of the adversarial bandit convex optimization problem. Focusing on the one-dimensional case, we prove that the minimax regret is $\widetilde\Theta(\sqrt{T})$ and partially resolve a decade-old open problem. Our analysis is non-constructive, as we do not present a concrete algorithm that attains this regret rate. Instead, we use minimax duality to reduce the problem to a Bayesian setting, where the convex loss functions are drawn from a worst-case distribution, and then we solve the Bayesian version of the problem with a variant of Thompson Sampling. Our analysis features a novel use of convexity, formalized as a "local-to-global" property of convex functions, that may be of independent interest.
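The abstract's key reduction solves a Bayesian bandit problem with a variant of Thompson Sampling. As background, the classical Thompson Sampling principle can be sketched for the simplest case of Bernoulli-reward bandits: maintain a posterior over each arm's mean, sample from each posterior, and play the arm whose sample is best. This is a generic illustration of the principle only, not the paper's variant for bandit convex optimization (which operates on convex loss functions drawn from a worst-case prior); the arm means and horizon below are arbitrary.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits (illustrative sketch only;
    the paper's algorithm is a variant adapted to convex losses)."""
    rng = random.Random(seed)
    k = len(true_means)
    # Beta(1, 1) prior on each arm's mean reward; Bernoulli likelihood
    # keeps the posterior Beta(successes, failures).
    successes = [1] * k
    failures = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior,
        # then play the arm whose sampled mean is largest.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Over a long horizon the posterior concentrates, so the algorithm pulls the best arm far more often than the others; this exploration-by-posterior-sampling is the mechanism the paper adapts to the Bayesian convex-loss setting obtained via minimax duality.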

Cite

Text

Bubeck et al. "Bandit Convex Optimization: \(\sqrt{T}\) Regret in One Dimension." Annual Conference on Computational Learning Theory, 2015.

Markdown

[Bubeck et al. "Bandit Convex Optimization: \(\sqrt{T}\) Regret in One Dimension." Annual Conference on Computational Learning Theory, 2015.](https://mlanthology.org/colt/2015/bubeck2015colt-bandit/)

BibTeX

@inproceedings{bubeck2015colt-bandit,
  title     = {{Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension}},
  author    = {Bubeck, Sébastien and Dekel, Ofer and Koren, Tomer and Peres, Yuval},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2015},
  pages     = {266--278},
  url       = {https://mlanthology.org/colt/2015/bubeck2015colt-bandit/}
}