Lipschitz Bandits Without the Lipschitz Constant
Abstract
We consider the setting of stochastic bandit problems with a continuum of arms indexed by $[0,1]^d$. We first point out that the strategies considered so far in the literature have only provided theoretical guarantees of the form: given some tuning parameters, the regret is small with respect to a class of environments that depends on these parameters. This is, however, not the right perspective, as it is the strategy that should adapt to the specific bandit environment at hand, and not the other way round. Put differently, an adaptation issue is raised. We solve it for the special case of environments whose mean-payoff functions are globally Lipschitz. More precisely, we show that the minimax-optimal order of magnitude $L^{d/(d+2)}\, T^{(d+1)/(d+2)}$ for the regret over $T$ time steps against an environment whose mean-payoff function $f$ is Lipschitz with constant $L$ can be achieved without knowing $L$ or $T$ in advance. This is in contrast to all previously known strategies, which require, to some extent, knowledge of $L$ to achieve this performance guarantee.
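As a reading aid, here is one way to restate the claimed minimax rate in LaTeX. The regret notation $R_T$, the arm $X_t$, and the statement as a $\Theta(\cdot)$ over the Lipschitz class are our own shorthand (up to constants, and ignoring any regime restrictions not spelled out in the abstract); they are not taken verbatim from the paper:

$$
\sup_{f\,:\,L\text{-Lipschitz on }[0,1]^d} \mathbb{E}\!\left[R_T\right]
\;=\; \Theta\!\left( L^{d/(d+2)}\, T^{(d+1)/(d+2)} \right),
\qquad
R_T \;=\; \sum_{t=1}^{T} \Big( \sup_{x\in[0,1]^d} f(x) - f(X_t) \Big),
$$

where $X_t$ denotes the arm pulled at round $t$. The point emphasized in the abstract is that this order of magnitude can be attained by a strategy that knows neither $L$ nor $T$ in advance.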
Cite
Text
Bubeck et al. "Lipschitz Bandits Without the Lipschitz Constant." International Conference on Algorithmic Learning Theory, 2011. doi:10.1007/978-3-642-24412-4_14
Markdown
[Bubeck et al. "Lipschitz Bandits Without the Lipschitz Constant." International Conference on Algorithmic Learning Theory, 2011.](https://mlanthology.org/alt/2011/bubeck2011alt-lipschitz/) doi:10.1007/978-3-642-24412-4_14
BibTeX
@inproceedings{bubeck2011alt-lipschitz,
title = {{Lipschitz Bandits Without the Lipschitz Constant}},
author = {Bubeck, Sébastien and Stoltz, Gilles and Yu, Jia Yuan},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2011},
pages = {144-158},
doi = {10.1007/978-3-642-24412-4_14},
url = {https://mlanthology.org/alt/2011/bubeck2011alt-lipschitz/}
}