Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees
Abstract
We propose a single timescale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem. A least squares temporal difference (LSTD) method is applied to the critic, and a natural policy gradient method is used for the actor. We prove convergence with sample complexity $\mathcal{O}(\varepsilon^{-1} \log^2(\varepsilon^{-1}))$. The proof technique applies to general single timescale bilevel optimization problems. We also validate our theoretical convergence results numerically.
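To make the single timescale structure concrete, below is a minimal sketch in which each iteration performs one LSTD solve for the critic immediately followed by one natural-gradient step for the actor, with no separation of update rates. This is an illustration under assumptions, not the authors' algorithm: the problem data `A`, `B`, `Q`, `R`, the discount `gamma`, the actor step `eta`, and the exploration scale `sigma` are all hypothetical choices for a small discounted LQR instance.

```python
import numpy as np

# Illustrative 2-D discounted LQR instance (assumed, not from the paper).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # dynamics: x' = A x + B u + noise
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)             # quadratic state / control costs
gamma, eta, sigma = 0.95, 0.2, 0.1      # discount, actor step, exploration

def rollout(K, T=2000):
    """Simulate one trajectory under the exploratory policy u = -K x + noise."""
    xs, us, cs = [], [], []
    x = rng.normal(size=2)
    for _ in range(T):
        u = -K @ x + sigma * rng.normal(size=1)
        xs.append(x); us.append(u); cs.append(x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u + 0.05 * rng.normal(size=2)
    return np.array(xs), np.array(us), np.array(cs)

def feat(x, u):
    """Quadratic features: upper triangle of z z^T with z = (x, u)."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(z.size)]

K = np.zeros((1, 2))                    # linear policy u = -K x
for it in range(50):
    xs, us, cs = rollout(K)
    # Critic: LSTD solve for the quadratic Q-function Q(x, u) = z^T Theta z.
    Phi = np.array([feat(x, u) for x, u in zip(xs[:-1], us[:-1])])
    Phi_next = np.array([feat(x, -K @ x) for x in xs[1:]])
    Aw = Phi.T @ (Phi - gamma * Phi_next)
    bw = Phi.T @ cs[:-1]
    w = np.linalg.solve(Aw + 1e-6 * np.eye(Aw.shape[0]), bw)
    Theta = np.zeros((3, 3))
    Theta[np.triu_indices(3)] = w
    Theta = (Theta + Theta.T) / 2       # symmetrize; halves off-diagonals
    # Actor: natural policy gradient step. For LQR the natural-gradient
    # direction is E_K = Theta_uu K - Theta_ux (the state-covariance factor
    # of the vanilla gradient cancels), so K <- K - eta * E_K.
    K = K - eta * (Theta[2:, 2:] @ K - Theta[2:, :2])

print("learned gain K:", K)
```

The quadratic features are a natural fit here: the Q-function of a linear policy in LQR is exactly quadratic in $(x, u)$, so the LSTD critic is well specified, and the natural-gradient direction reduces to the closed form $\Theta_{uu} K - \Theta_{ux}$ used in the actor step.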
Cite
Text
Zhou and Lu. "Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees." Journal of Machine Learning Research, 2023.
Markdown
[Zhou and Lu. "Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees." Journal of Machine Learning Research, 2023.](https://mlanthology.org/jmlr/2023/zhou2023jmlr-single/)
BibTeX
@article{zhou2023jmlr-single,
title = {{Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees}},
author = {Zhou, Mo and Lu, Jianfeng},
journal = {Journal of Machine Learning Research},
year = {2023},
pages = {1--34},
volume = {24},
url = {https://mlanthology.org/jmlr/2023/zhou2023jmlr-single/}
}