Bandit Linear Control

Abstract

We consider the problem of controlling a known linear dynamical system under stochastic noise, adversarially chosen costs, and bandit feedback. Unlike the full feedback setting where the entire cost function is revealed after each decision, here only the cost incurred by the learner is observed. We present a new and efficient algorithm that, for strongly convex and smooth costs, obtains regret that grows with the square root of the time horizon T. We also give extensions of this result to general convex, possibly non-smooth costs, and to non-stochastic system noise. A key component of our algorithm is a new technique for addressing bandit optimization of loss functions with memory.
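To make the setting in the abstract concrete, the following is a minimal Python sketch (not the paper's algorithm) of the interaction protocol it describes: a known linear system x_{t+1} = A x_t + B u_t + w_t driven by stochastic noise, adversarially chosen convex costs, and bandit feedback in which only the scalar cost incurred by the learner is revealed. The system matrices, the fixed state-feedback gain K with exploration noise, and the toy time-varying cost are illustrative assumptions, not components of the authors' method.

# Minimal sketch (assumptions throughout) of the bandit linear control protocol:
# known linear dynamics, stochastic noise, adversarial costs, bandit feedback.
import numpy as np

rng = np.random.default_rng(0)

# Known system matrices (toy 2-dimensional example, chosen for illustration).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])

# Placeholder stabilizing state-feedback gain and exploration scale (assumptions;
# the paper's algorithm chooses its controls differently).
K = np.array([[0.0, 0.5]])
explore_scale = 0.1

T = 1000
x = np.zeros(2)
total_cost = 0.0

for t in range(T):
    # Learner picks a control; a perturbed linear policy stands in for the
    # actual bandit algorithm here.
    u = -K @ x + explore_scale * rng.standard_normal(1)

    # The adversary's cost function is never revealed; only its value at the
    # played (x, u) is observed. A time-varying quadratic serves as a stand-in.
    q_t = 1.0 + np.sin(0.01 * t)                  # adversarial weight (toy choice)
    observed_cost = q_t * (x @ x) + float(u @ u)  # bandit feedback: one scalar
    total_cost += observed_cost

    # System evolves under the known dynamics plus stochastic noise w_t.
    w = 0.05 * rng.standard_normal(2)
    x = A @ x + B @ u + w

print(f"average incurred cost over T={T} rounds: {total_cost / T:.4f}")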

Cite

Text

Cassel and Koren. "Bandit Linear Control." Neural Information Processing Systems, 2020.

Markdown

[Cassel and Koren. "Bandit Linear Control." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/cassel2020neurips-bandit/)

BibTeX

@inproceedings{cassel2020neurips-bandit,
  title     = {{Bandit Linear Control}},
  author    = {Cassel, Asaf and Koren, Tomer},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/cassel2020neurips-bandit/}
}