Multiarmed Bandits with Limited Expert Advice
Abstract
We consider the problem of minimizing regret in the setting of advice-efficient multiarmed bandits with expert advice. We give an algorithm for the setting of K arms and N experts, out of which we are allowed to query and use only M experts' advice in each round, which has a regret bound of $\tilde{O}\left(\sqrt{\frac{\min\{K,M\}\,N}{M}\,T}\right)$ after T rounds. We also prove that any algorithm for this problem must have expected regret at least $\tilde{\Omega}\left(\sqrt{\frac{\min\{K,M\}\,N}{M}\,T}\right)$, thus showing that our upper bound is nearly tight. This solves the COLT 2013 open problem of Seldin et al. (2013).
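The interaction protocol described above can be sketched as a simple simulation. This is a minimal illustration of the setting only, not the paper's algorithm: the learner here follows a uniformly chosen queried expert, which is a naive placeholder strategy, and the experts and losses are random stand-ins.

```python
import random

def run_protocol(T, K, N, M, seed=0):
    """Simulate the limited-advice bandit protocol: each round the
    learner queries M of the N experts, each queried expert recommends
    an arm, the learner pulls one arm, and only the pulled arm's loss
    is observed. The learner below is a naive placeholder (follow a
    random queried expert), not the algorithm from the paper."""
    rng = random.Random(seed)
    total_loss = 0.0
    for t in range(T):
        queried = rng.sample(range(N), M)          # choose M experts to query
        # hypothetical experts: each recommends an arm at random
        advice = {i: rng.randrange(K) for i in queried}
        arm = advice[rng.choice(queried)]          # follow one queried expert
        loss = rng.random()                        # loss in [0, 1] for the pulled arm
        total_loss += loss                         # only this loss is ever observed
    return total_loss

print(run_protocol(T=100, K=5, N=10, M=3))
```

Since per-round losses lie in [0, 1], the cumulative loss over T rounds is always between 0 and T; the regret bounds in the abstract compare this quantity to the loss of the best expert in hindsight.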
Cite

Kale. "Multiarmed Bandits with Limited Expert Advice." Annual Conference on Computational Learning Theory, 2014.
https://mlanthology.org/colt/2014/kale2014colt-multiarmed/

BibTeX:
@inproceedings{kale2014colt-multiarmed,
title = {{Multiarmed Bandits with Limited Expert Advice}},
author = {Kale, Satyen},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2014},
pages = {107--122},
url = {https://mlanthology.org/colt/2014/kale2014colt-multiarmed/}
}