Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting

Abstract

We study contextual bandit learning with an abstract policy class and continuous action space. We obtain two qualitatively different regret bounds: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both bounds exhibit data-dependent “zooming” behavior and, with no tuning, yield improved guarantees for benign problems. We also study adapting to unknown smoothness parameters, establishing a price-of-adaptivity and deriving optimal adaptive algorithms that require no additional information.
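As a rough illustration of the smoothed benchmark mentioned above (a minimal sketch; the uniform kernel, bandwidth h, and notation below are assumptions for illustration, not taken from this page), the smoothed version of a policy replaces its action with an average over a small neighborhood, so no continuity of the reward function is needed:

% Sketch of a smoothed benchmark (assumed uniform kernel of bandwidth h).
% For a policy \pi and context x, the smoothed policy plays an action drawn
% uniformly from a radius-h interval around \pi(x); its expected reward is
% the average of the reward function f(x, \cdot) over that interval.
\[
  \mathrm{Smooth}_h(\pi)(x) = \mathrm{Unif}\bigl([\pi(x) - h,\, \pi(x) + h]\bigr),
  \qquad
  f\bigl(x, \mathrm{Smooth}_h(\pi)\bigr)
  = \frac{1}{2h} \int_{\pi(x)-h}^{\pi(x)+h} f(x, a)\, da .
\]
% Regret is then measured against the best smoothed policy in the class,
% \max_{\pi \in \Pi} \mathbb{E}_x\bigl[ f\bigl(x, \mathrm{Smooth}_h(\pi)\bigr) \bigr].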

Cite

Text

Krishnamurthy et al. "Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting." Journal of Machine Learning Research, 2020.

Markdown

[Krishnamurthy et al. "Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting." Journal of Machine Learning Research, 2020.](https://mlanthology.org/jmlr/2020/krishnamurthy2020jmlr-contextual/)

BibTeX

@article{krishnamurthy2020jmlr-contextual,
  title     = {{Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting}},
  author    = {Krishnamurthy, Akshay and Langford, John and Slivkins, Aleksandrs and Zhang, Chicheng},
  journal   = {Journal of Machine Learning Research},
  year      = {2020},
  pages     = {1--45},
  volume    = {21},
  url       = {https://mlanthology.org/jmlr/2020/krishnamurthy2020jmlr-contextual/}
}