Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems
Abstract
We study infinite-horizon zero-sum linear quadratic (LQ) games, where the state transition is linear and the cost function is quadratic in the states and the actions of the two players. In particular, we develop an adaptive algorithm that trades off exploration and exploitation of the unknown environment in LQ games based on the optimism-in-the-face-of-uncertainty (OFU) principle. We show that (i) the average regret of player $1$ (the min player) can be bounded by $\widetilde{\mathcal{O}}(1/\sqrt{T})$ against any fixed linear policy of the adversary (player $2$); (ii) the average cost of player $1$ also converges to the value of the game at a sublinear $\widetilde{\mathcal{O}}(1/\sqrt{T})$ rate if the adversary plays adaptively against player $1$ with the same algorithm, i.e., with self-play. To the best of our knowledge, this is the first provably sample-efficient reinforcement learning algorithm for zero-sum LQ games.
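For concreteness, the dynamics and per-stage cost in a zero-sum LQ game take the following standard form (the matrices $A$, $B$, $C$, $Q$, $R^u$, $R^v$ below are generic placeholders and not notation taken from the paper):
$$x_{t+1} = A x_t + B u_t + C v_t + w_t, \qquad c(x_t, u_t, v_t) = x_t^\top Q x_t + u_t^\top R^u u_t - v_t^\top R^v v_t,$$
where player $1$ chooses $u_t$ to minimize the long-run average of $c$, player $2$ chooses $v_t$ to maximize it, and $w_t$ denotes process noise. Under this convention, the $\widetilde{\mathcal{O}}(1/\sqrt{T})$ guarantees above compare player $1$'s average cost over $T$ steps against the relevant benchmark: the best linear response to the adversary's fixed policy in case (i), and the value of the game in case (ii).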
Cite
Text
Zhang et al. "Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.
Markdown
[Zhang et al. "Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.](https://mlanthology.org/l4dc/2021/zhang2021l4dc-provably/)
BibTeX
@inproceedings{zhang2021l4dc-provably,
title = {{Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems}},
author = {Zhang, Jingwei and Yang, Zhuoran and Zhou, Zhengyuan and Wang, Zhaoran},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
year = {2021},
pages = {597--598},
volume = {144},
url = {https://mlanthology.org/l4dc/2021/zhang2021l4dc-provably/}
}