Beating Stochastic and Adversarial Semi-Bandits Optimally and Simultaneously
Abstract
We develop the first general semi-bandit algorithm that simultaneously achieves $\mathcal{O}(\log T)$ regret for stochastic environments and $\mathcal{O}(\sqrt{T})$ regret for adversarial environments without knowledge of the regime or the number of rounds $T$. The leading problem-dependent constants of our bounds are not only optimal in some worst-case sense studied previously, but also optimal for two concrete instances of semi-bandit problems. Our algorithm and analysis extend the recent work of (Zimmert & Seldin, 2019) for the special case of multi-armed bandits, but importantly require a novel hybrid regularizer designed specifically for the semi-bandit problem. Experimental results on synthetic data show that our algorithm indeed performs well uniformly over different environments. We finally provide a preliminary extension of our results to the full bandit feedback setting.
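To make the hybrid regularizer concrete, the following is a minimal sketch of one round of follow-the-regularized-leader (FTRL) built around a regularizer of the form $\sqrt{t}\,\sum_i \left(-\sqrt{x_i} + \gamma\,(1-x_i)\log(1-x_i)\right)$, i.e. a $1/2$-Tsallis-entropy term plus a complementary negative-entropy term. The sketch restricts attention to the hypercube action set, where the FTRL step decomposes per coordinate and independent inclusion of each base arm matches the marginals exactly; `gamma`, the horizon, and the synthetic loss means are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

d, T, gamma = 5, 10_000, 1.0          # base arms, horizon, hybrid weight (illustrative)
mean_loss = rng.uniform(0.1, 0.9, d)  # synthetic stochastic Bernoulli loss means

L_hat = np.zeros(d)                   # cumulative importance-weighted loss estimates

def ftrl_coordinate(L_i: float, t: int) -> float:
    """Minimize x*L_i/sqrt(t) - sqrt(x) + gamma*(1-x)*log(1-x) over (0, 1)."""
    eta = 1.0 / np.sqrt(t)            # decreasing learning rate
    def obj(x):
        return eta * x * L_i - np.sqrt(x) + gamma * (1 - x) * np.log1p(-x)
    res = minimize_scalar(obj, bounds=(1e-12, 1 - 1e-12), method="bounded")
    return res.x

for t in range(1, T + 1):
    # For the hypercube action set, the FTRL step decomposes per coordinate.
    x = np.array([ftrl_coordinate(L_hat[i], t) for i in range(d)])
    # Play a subset: include base arm i independently with probability x[i].
    A = rng.random(d) < x
    loss = rng.binomial(1, mean_loss)            # semi-bandit feedback on chosen arms
    # Unbiased importance-weighted estimates for the observed coordinates only.
    L_hat += np.where(A, loss / x, 0.0)
```

Roughly, the Tsallis-entropy term is what yields $\mathcal{O}(\sqrt{T})$ adversarial and $\mathcal{O}(\log T)$ stochastic regret in the multi-armed bandit case, while the $(1-x_i)\log(1-x_i)$ term appears to be needed in the semi-bandit setting, where marginal probabilities can sit close to $1$; see the paper for the precise form and analysis.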
Cite
Text
Zimmert et al. "Beating Stochastic and Adversarial Semi-Bandits Optimally and Simultaneously." International Conference on Machine Learning, 2019.Markdown
[Zimmert et al. "Beating Stochastic and Adversarial Semi-Bandits Optimally and Simultaneously." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/zimmert2019icml-beating/)BibTeX
@inproceedings{zimmert2019icml-beating,
title = {{Beating Stochastic and Adversarial Semi-Bandits Optimally and Simultaneously}},
author = {Zimmert, Julian and Luo, Haipeng and Wei, Chen-Yu},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {7683--7692},
volume = {97},
url = {https://mlanthology.org/icml/2019/zimmert2019icml-beating/}
}