Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret

Abstract

We present an efficient second-order algorithm with $\tilde{O}(\frac{1}{\eta}\sqrt{T})$ regret for the bandit online multiclass problem. The regret bound holds simultaneously with respect to a family of loss functions parameterized by $\eta$, ranging from hinge loss ($\eta=0$) to squared hinge loss ($\eta=1$). This provides a solution to the open problem of (Abernethy, J. and Rakhlin, A. An efficient bandit algorithm for $\sqrt{T}$-regret in online multiclass prediction? In COLT, 2009). We test our algorithm experimentally, showing that it performs favorably against earlier algorithms.
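To make the two endpoints of the loss family concrete, here is a minimal sketch of hinge loss ($\eta=0$) and squared hinge loss ($\eta=1$) as functions of the margin, together with a simple convex combination between them. The function names and the convex-combination form are assumptions for illustration only; the paper defines its own $\eta$-parameterized family sharing these endpoints.

import numpy as np

def hinge_loss(z):
    # Hinge loss: the eta = 0 endpoint of the family in the abstract.
    return np.maximum(0.0, 1.0 - z)

def squared_hinge_loss(z):
    # Squared hinge loss: the eta = 1 endpoint of the family.
    return np.maximum(0.0, 1.0 - z) ** 2

def blended_loss(z, eta):
    # Hypothetical convex combination, used purely for illustration;
    # not the exact eta-parameterized family defined in the paper.
    return (1.0 - eta) * hinge_loss(z) + eta * squared_hinge_loss(z)

margins = np.linspace(-1.0, 2.0, 7)
for eta in (0.0, 0.5, 1.0):
    print(f"eta={eta}: {np.round(blended_loss(margins, eta), 3)}")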

Cite

Text

Beygelzimer et al. "Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret." International Conference on Machine Learning, 2017.

Markdown

[Beygelzimer et al. "Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/beygelzimer2017icml-efficient/)

BibTeX

@inproceedings{beygelzimer2017icml-efficient,
  title     = {{Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret}},
  author    = {Beygelzimer, Alina and Orabona, Francesco and Zhang, Chicheng},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {488--497},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/beygelzimer2017icml-efficient/}
}