Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon

Abstract

We study finite-time horizon, continuous-time linear-quadratic reinforcement learning problems in an episodic setting, where both the state and control coefficients are unknown to the controller. We first propose a least-squares algorithm based on continuous-time observations and controls, and establish a logarithmic regret bound of order $\mathcal{O}((\ln M)(\ln\ln M))$, where $M$ is the number of learning episodes. The analysis consists of two components: a perturbation analysis, which exploits the regularity and robustness of the associated Riccati differential equation, and a parameter estimation error analysis, which relies on sub-exponential properties of continuous-time least-squares estimators. We further propose a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls, which achieves a similar logarithmic regret bound with an additional term depending explicitly on the time stepsizes used in the algorithm.
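
As a rough illustration of the generic recipe behind such episodic least-squares LQ algorithms (not the paper's exact procedure or notation), the sketch below regresses discretely observed state increments on states and piecewise constant controls to estimate the unknown drift coefficients, then solves the associated Riccati differential equation under the current estimate to obtain a feedback gain. The function names, shapes, and the forward-difference discretization are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical sketch: estimate the drift coefficients (A, B) of a linear SDE
#   dX_t = (A X_t + B u_t) dt + dW_t
# from discrete-time observations and piecewise constant controls, then plan
# with the finite-horizon Riccati ODE under the estimated parameters.

def least_squares_drift(X, U, dt):
    """Estimate (A, B) from one discretized trajectory.

    X: observed states, shape (n_steps + 1, d)
    U: piecewise constant controls, shape (n_steps, d_u)
    dt: time step between observations
    Regresses the increments (X_{k+1} - X_k) / dt on the regressors [X_k, U_k].
    """
    dX = (X[1:] - X[:-1]) / dt           # approximate drift at each step
    Z = np.hstack([X[:-1], U])           # regressors [X_k, U_k]
    theta, *_ = np.linalg.lstsq(Z, dX, rcond=None)
    d = X.shape[1]
    A_hat = theta[:d].T                  # (d, d) state coefficient
    B_hat = theta[d:].T                  # (d, d_u) control coefficient
    return A_hat, B_hat

def riccati_gain(A, B, Q, R, QT, T, n_grid=200):
    """Solve the Riccati ODE backwards on [0, T] with P(T) = QT and return
    the feedback gains K(t) = -R^{-1} B^T P(t) on a forward-time grid."""
    d = A.shape[0]

    def rhs(t, p_flat):
        P = p_flat.reshape(d, d)
        dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
        return dP.ravel()

    ts = np.linspace(T, 0.0, n_grid)                 # integrate backwards in time
    sol = solve_ivp(rhs, (T, 0.0), QT.ravel(), t_eval=ts)
    Ps = sol.y.T.reshape(-1, d, d)[::-1]             # reorder to forward time
    Ks = np.array([-np.linalg.solve(R, B.T @ P) for P in Ps])
    return ts[::-1], Ks
```

In an episodic loop, one would re-estimate `(A_hat, B_hat)` from the data collected so far after each episode and apply the feedback control $u_t = K(t)\,X_t$ computed from `riccati_gain` during the next episode; the regret analysis in the paper quantifies how the estimation error and the resulting Riccati perturbation accumulate over $M$ such episodes.
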

Cite

Text

Basei et al. "Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon." Journal of Machine Learning Research, 2022.

Markdown

[Basei et al. "Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/basei2022jmlr-logarithmic/)

BibTeX

@article{basei2022jmlr-logarithmic,
  title     = {{Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon}},
  author    = {Basei, Matteo and Guo, Xin and Hu, Anran and Zhang, Yufei},
  journal   = {Journal of Machine Learning Research},
  year      = {2022},
  pages     = {1-34},
  volume    = {23},
  url       = {https://mlanthology.org/jmlr/2022/basei2022jmlr-logarithmic/}
}