A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning

Abstract

We study Q-learning with Polyak-Ruppert averaging (a.k.a. averaged Q-learning) in a discounted Markov decision process in the synchronous and tabular setting. Under a Lipschitz condition, we establish a functional central limit theorem (FCLT) for the averaged iterate $\bar{\mathbf{Q}}_T$ and show that its standardized partial-sum process converges weakly to a rescaled Brownian motion. The FCLT implies a fully online inference method for reinforcement learning. Furthermore, we show that $\bar{\mathbf{Q}}_T$ is a regular asymptotically linear (RAL) estimator for the optimal Q-value function $\mathbf{Q}^*$ whose influence function is the most efficient. We also present a nonasymptotic analysis of the $\ell_{\infty}$ error $\mathbb{E}\|\bar{\mathbf{Q}}_T-\mathbf{Q}^*\|_{\infty}$, showing that it matches the instance-dependent lower bound for polynomial step sizes. Similar results hold for entropy-regularized Q-learning without the Lipschitz condition.
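To make the object of study concrete, here is a minimal sketch of synchronous tabular Q-learning with Polyak-Ruppert averaging. The toy MDP, the polynomial step-size exponent, and all variable names are illustrative assumptions, not the paper's experimental setup; the sketch only shows how the averaged iterate $\bar{\mathbf{Q}}_T$ is formed alongside the last iterate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9

# Toy MDP (an assumption): random transition kernel P[s, a] over next
# states and a fixed reward table r[s, a] with rewards in [0, 1].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = rng.uniform(0, 1, size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
Q_bar = np.zeros_like(Q)  # Polyak-Ruppert average, the estimator studied

T = 5000
for t in range(1, T + 1):
    eta = t ** -0.7  # polynomial step size (exponent is an assumption)
    # Synchronous setting: sample one next state for every (s, a) pair.
    s_next = np.array([[rng.choice(n_states, p=P[s, a])
                        for a in range(n_actions)]
                       for s in range(n_states)])
    target = r + gamma * Q[s_next].max(axis=-1)
    Q += eta * (target - Q)
    # Running average of the iterates: Q_bar equals mean(Q_1, ..., Q_t).
    Q_bar += (Q - Q_bar) / t

# Exact Q* via value iteration, for comparison against Q_bar.
Q_star = np.zeros_like(Q)
for _ in range(1000):
    Q_star = r + gamma * P @ Q_star.max(axis=-1)

print(np.abs(Q_bar - Q_star).max())  # the l_inf error of the abstract
```

The running-average update `Q_bar += (Q - Q_bar) / t` keeps the averaging fully online, which is what makes the FCLT-based inference procedure implementable without storing past iterates.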

Cite

Text

Li et al. "A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning." Artificial Intelligence and Statistics, 2023.

Markdown

[Li et al. "A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/li2023aistats-statistical/)

BibTeX

@inproceedings{li2023aistats-statistical,
  title     = {{A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning}},
  author    = {Li, Xiang and Yang, Wenhao and Liang, Jiadong and Zhang, Zhihua and Jordan, Michael I.},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {2207--2261},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/li2023aistats-statistical/}
}