Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting
Abstract
We study contextual bandit learning for any competitor policy class and continuous action space. We obtain two qualitatively different regret bounds: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both bounds exhibit data-dependent "zooming" behavior and, with no tuning, yield improved guarantees for benign problems. We also study adapting to unknown smoothness parameters, establishing a price-of-adaptivity and deriving optimal adaptive algorithms that require no additional information.
Cite
Text
Krishnamurthy et al. "Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting." Conference on Learning Theory, 2019.
Markdown
[Krishnamurthy et al. "Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/krishnamurthy2019colt-contextual/)
BibTeX
@inproceedings{krishnamurthy2019colt-contextual,
title = {{Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting}},
author = {Krishnamurthy, Akshay and Langford, John and Slivkins, Aleksandrs and Zhang, Chicheng},
booktitle = {Conference on Learning Theory},
year = {2019},
pages = {2025--2027},
volume = {99},
url = {https://mlanthology.org/colt/2019/krishnamurthy2019colt-contextual/}
}