On the Convergence of Single-Timescale Actor-Critic
Abstract
We analyze the global convergence of the single-timescale actor-critic (AC) algorithm for infinite-horizon discounted Markov Decision Processes (MDPs) with finite state spaces. To this end, we introduce an elegant analytical framework for handling the complex, coupled recursions inherent in the algorithm. Leveraging this framework, we establish that the algorithm converges to an $\epsilon$-close \textbf{globally optimal} policy with a sample complexity of $O(\epsilon^{-3})$. This significantly improves upon the existing complexity of $O(\epsilon^{-2})$ for reaching an $\epsilon$-close \textbf{stationary} policy, which, via the gradient domination lemma, translates to a complexity of $O(\epsilon^{-4})$ for reaching an $\epsilon$-close \textbf{globally optimal} policy. Furthermore, we demonstrate that to achieve this improvement, the step sizes for both the actor and the critic must decay as $O(k^{-\frac{2}{3}})$ with the iteration index $k$, diverging from the conventional $O(k^{-\frac{1}{2}})$ rates commonly used in (non)convex optimization.
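To make the single-timescale setting concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm or analysis) of tabular actor-critic in which the actor (softmax policy) and the critic (TD(0) value estimate) are updated at every step, with both step sizes decaying at the $O(k^{-2/3})$ rate highlighted in the abstract. The synthetic MDP, the tabular parameterization, and the step-size constants are illustrative assumptions, not taken from the paper.

```python
# Minimal single-timescale actor-critic sketch on a small synthetic tabular MDP.
# Both the actor and critic step sizes decay as k^(-2/3); all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.95

# Random MDP: P[s, a] is a distribution over next states, R[s, a] a reward in [0, 1].
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(0.0, 1.0, size=(S, A))

theta = np.zeros((S, A))   # actor: tabular softmax policy parameters
V = np.zeros(S)            # critic: tabular value estimates

def policy(s):
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

s = rng.integers(S)
for k in range(1, 20001):
    # Single timescale: actor and critic step sizes share the same k^(-2/3) decay.
    alpha_k = 0.5 * k ** (-2.0 / 3.0)   # actor step size
    beta_k = 0.5 * k ** (-2.0 / 3.0)    # critic step size

    pi_s = policy(s)
    a = rng.choice(A, p=pi_s)
    s_next = rng.choice(S, p=P[s, a])
    r = R[s, a]

    # Critic: TD(0) update of the value estimate.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += beta_k * td_error

    # Actor: policy-gradient step using the TD error as an advantage estimate.
    grad_log = -pi_s
    grad_log[a] += 1.0
    theta[s] += alpha_k * td_error * grad_log

    s = s_next

print("learned greedy actions per state:", [int(np.argmax(theta[s_])) for s_ in range(S)])
```

In this sketch, "single timescale" simply means the two updates are interleaved every step with step sizes of the same order, rather than letting the critic operate on a faster schedule than the actor.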
Cite
Text
Kumar et al. "On the Convergence of Single-Timescale Actor-Critic." Advances in Neural Information Processing Systems, 2025.

Markdown
[Kumar et al. "On the Convergence of Single-Timescale Actor-Critic." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/kumar2025neurips-convergence/)

BibTeX
@inproceedings{kumar2025neurips-convergence,
  title     = {{On the Convergence of Single-Timescale Actor-Critic}},
  author    = {Kumar, Navdeep and Agrawal, Priyank and Ramponi, Giorgia and Levy, Kfir Yehuda and Mannor, Shie},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kumar2025neurips-convergence/}
}