A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach

Abstract

This work examines average-reward reinforcement learning with general policy parametrization. Existing state-of-the-art (SOTA) guarantees for this problem are either suboptimal or hindered by several challenges, including poor scalability with respect to the size of the state-action space, high iteration complexity, and a significant dependence on knowledge of mixing times and hitting times. To address these limitations, we propose a Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm. Our work is the first to achieve a global convergence rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$ for average-reward Markov Decision Processes (MDPs), where $T$ is the horizon length, using an Actor-Critic approach. Moreover, the convergence rate does not scale with the size of the state space, making the result applicable even to infinite state spaces.
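To give a feel for the Multi-level Monte Carlo idea referenced in the abstract, below is a minimal, hypothetical sketch of an MLMC estimator for the long-run average reward of a fixed policy. It is not the paper's MLMC-NAC algorithm: the environment interface (`env.reset`, `env.step`), the policy callable, and the truncation parameter `max_level` are illustrative assumptions, and the same level-randomization trick would in practice be applied to the critic and natural-gradient estimates rather than to the average reward alone.

```python
# A minimal, hypothetical sketch of a Multi-level Monte Carlo (MLMC) estimator
# for the average reward of a fixed policy. The environment/policy interfaces
# and `max_level` truncation are assumptions for illustration, not the paper's
# exact MLMC-NAC procedure.
import numpy as np

def rollout_rewards(env, policy, horizon, rng):
    """Collect `horizon` rewards by following `policy` in `env` (assumed API)."""
    state = env.reset()
    rewards = np.empty(horizon)
    for t in range(horizon):
        action = policy(state, rng)
        state, rewards[t] = env.step(state, action)
    return rewards

def mlmc_average_reward(env, policy, max_level, rng):
    """MLMC estimate of the long-run average reward (unbiased up to truncation).

    Draw a level J ~ Geometric(1/2) and run a trajectory of length 2^J.
    Longer (rarer) trajectories correct the bias of shorter ones via a
    telescoping sum, while the expected trajectory length stays only
    logarithmic in the longest allowed rollout 2^max_level.
    """
    level = min(rng.geometric(0.5), max_level)       # J ~ Geom(1/2), truncated
    rewards = rollout_rewards(env, policy, 2 ** level, rng)
    y0 = rewards[0]                                   # level-0 estimate
    if level == 0:
        return y0
    y_fine = rewards.mean()                           # average over 2^J steps
    y_coarse = rewards[: 2 ** (level - 1)].mean()     # average over 2^{J-1} steps
    # Weight 2^J compensates for P(J = j) = 2^{-j}, so the correction terms
    # telescope in expectation to the long-horizon average.
    return y0 + (2 ** level) * (y_fine - y_coarse)
```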

Cite

Text

Ganesh et al. "A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ganesh et al. "A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ganesh2025icml-sharper/)

BibTeX

@inproceedings{ganesh2025icml-sharper,
  title     = {{A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach}},
  author    = {Ganesh, Swetha and Mondal, Washim Uddin and Aggarwal, Vaneet},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {18206--18227},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ganesh2025icml-sharper/}
}