Bandit Convex Optimization: Towards Tight Bounds

Abstract

Bandit Convex Optimization (BCO) is a fundamental framework for decision making under uncertainty, which generalizes many problems from the realm of online and statistical learning. While the special case of linear cost functions is well understood, a gap on the attainable regret for BCO with nonlinear losses remains an important open question. In this paper we take a step towards understanding the best attainable regret bounds for BCO: we give an efficient and near-optimal regret algorithm for BCO with strongly-convex and smooth loss functions. In contrast to previous works on BCO that use time-invariant exploration schemes, our method employs an exploration scheme that shrinks with time.
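
The central idea highlighted in the abstract is an exploration scheme that shrinks as the game proceeds. As a rough illustration only, and not the authors' algorithm, the sketch below runs a classic single-point bandit gradient estimator with a shrinking exploration radius delta_t and a step size eta_t tuned for sigma-strongly-convex losses. The function names (bandit_gd_shrinking_exploration, project_ball), the schedules delta_t = 1/sqrt(t) and eta_t = 1/(sigma*t), and the quadratic test loss are illustrative assumptions.

# Illustrative sketch only: single-point bandit gradient descent with a
# shrinking exploration radius. Schedules and the test loss are assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)


def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def bandit_gd_shrinking_exploration(loss, dim, T, sigma=1.0):
    """Bandit gradient descent where only loss values are observed.

    At round t we play y_t = x_t + delta_t * u_t with u_t uniform on the
    sphere, observe loss(y_t), and form the one-point gradient estimate
    g_t = (dim / delta_t) * loss(y_t) * u_t. Both the exploration radius
    delta_t and the step size eta_t shrink with t (schedules chosen here
    for illustration, leaning on sigma-strong convexity).
    """
    x = np.zeros(dim)
    total_loss = 0.0
    for t in range(1, T + 1):
        delta_t = 1.0 / np.sqrt(t)          # shrinking exploration radius
        eta_t = 1.0 / (sigma * t)           # step size for strongly convex losses
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)              # uniform direction on the sphere
        y = x + delta_t * u                 # the point actually played
        value = loss(y)                     # bandit feedback: a single scalar
        total_loss += value
        g = (dim / delta_t) * value * u     # one-point gradient estimate
        x = project_ball(x - eta_t * g, radius=1.0 - delta_t)  # keep y feasible
    return x, total_loss


if __name__ == "__main__":
    dim = 5
    x_star = np.full(dim, 0.3)
    quad = lambda y: 0.5 * np.dot(y - x_star, y - x_star)  # strongly convex, smooth
    x_final, cum_loss = bandit_gd_shrinking_exploration(quad, dim, T=20000)
    print("final iterate:", np.round(x_final, 3))
    print("cumulative loss:", round(cum_loss, 2))

The shrinking radius captures the trade-off the abstract alludes to: a smaller delta_t reduces the bias of evaluating the loss away from x_t, at the cost of a higher-variance gradient estimate.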

Cite

Text

Hazan and Levy. "Bandit Convex Optimization: Towards Tight Bounds." Neural Information Processing Systems, 2014.

Markdown

[Hazan and Levy. "Bandit Convex Optimization: Towards Tight Bounds." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/hazan2014neurips-bandit/)

BibTeX

@inproceedings{hazan2014neurips-bandit,
  title     = {{Bandit Convex Optimization: Towards Tight Bounds}},
  author    = {Hazan, Elad and Levy, Kfir},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {784--792},
  url       = {https://mlanthology.org/neurips/2014/hazan2014neurips-bandit/}
}