An Algorithm with Nearly Optimal Pseudo-Regret for Both Stochastic and Adversarial Bandits
Abstract
We present an algorithm that achieves almost optimal pseudo-regret bounds against adversarial and stochastic bandits. Against adversarial bandits the pseudo-regret is $O(K\sqrt{n \log n})$ and against stochastic bandits the pseudo-regret is $O(\sum_i (\log n)/\Delta_i)$. We also show that no algorithm with $O(\log n)$ pseudo-regret against stochastic bandits can achieve $\tilde{O}(\sqrt{n})$ expected regret against adaptive adversarial bandits. This complements previous results of Bubeck and Slivkins (2012) that show $\tilde{O}(\sqrt{n})$ expected adversarial regret with $O((\log n)^2)$ stochastic pseudo-regret.
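The pseudo-regret discussed in the abstract compares the learner's pulled arms against the single best arm in expectation: $n\mu^* - \mathbb{E}[\sum_t \mu_{I_t}]$, where $\Delta_i = \mu^* - \mu_i$ is the gap of arm $i$. As a minimal illustration of this quantity (not the paper's algorithm), the sketch below simulates the classical UCB1 strategy on a stochastic Bernoulli bandit and reports its pseudo-regret; the arm means and horizon are arbitrary assumptions.

```python
import math
import random

def ucb1_pseudo_regret(means, n, seed=0):
    """Run UCB1 on a Bernoulli bandit with the given arm means for n rounds.

    Returns the pseudo-regret n * mu_star - sum_t mu_{I_t}, i.e. the expected
    shortfall of the pulled arms' means against always playing the best arm.
    """
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K        # pulls per arm
    sums = [0.0] * K        # cumulative observed reward per arm
    pulled_mean_total = 0.0  # sum of true means of the arms actually pulled

    for t in range(n):
        if t < K:
            arm = t  # initialization: pull each arm once
        else:
            # UCB1 index: empirical mean + sqrt(2 log t / pulls)
            arm = max(
                range(K),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t + 1) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        pulled_mean_total += means[arm]

    return n * max(means) - pulled_mean_total

# Example: two arms with gap Delta = 0.2, horizon n = 5000 (assumed values)
regret = ucb1_pseudo_regret([0.5, 0.7], 5000)
```

In the stochastic regime UCB1's pseudo-regret grows like $O(\sum_i (\log n)/\Delta_i)$, matching the rate stated in the abstract, but unlike the paper's algorithm it carries no adversarial guarantee.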
Cite
Text
Auer and Chiang. "An Algorithm with Nearly Optimal Pseudo-Regret for Both Stochastic and Adversarial Bandits." Annual Conference on Computational Learning Theory, 2016.
Markdown
[Auer and Chiang. "An Algorithm with Nearly Optimal Pseudo-Regret for Both Stochastic and Adversarial Bandits." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/auer2016colt-algorithm/)
BibTeX
@inproceedings{auer2016colt-algorithm,
title = {{An Algorithm with Nearly Optimal Pseudo-Regret for Both Stochastic and Adversarial Bandits}},
author = {Auer, Peter and Chiang, Chao-Kai},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2016},
pages = {116--120},
url = {https://mlanthology.org/colt/2016/auer2016colt-algorithm/}
}