Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm

Abstract

This paper investigates infinite-horizon average reward Constrained Markov Decision Processes (CMDPs) under general parametrized policies with smooth and bounded policy gradients. We propose a Primal-Dual Natural Actor-Critic algorithm that manages constraints while ensuring a fast convergence rate. In particular, our algorithm achieves global convergence and constraint violation rates of $\tilde{\mathcal{O}}(1/\sqrt{T})$ over a horizon of length $T$ when the mixing time, $\tau_{\mathrm{mix}}$, is known to the learner. In the absence of knowledge of $\tau_{\mathrm{mix}}$, the achievable rates change to $\tilde{\mathcal{O}}(1/T^{0.5-\epsilon})$ provided that $T \geq \tilde{\mathcal{O}}\left(\tau_{\mathrm{mix}}^{2/\epsilon}\right)$. Our results match the theoretical lower bound for Markov Decision Processes and establish a new benchmark in the theoretical exploration of average reward CMDPs.
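To make the primal-dual structure concrete, the sketch below runs a generic primal-dual policy-gradient loop on a hypothetical one-state CMDP (a constrained bandit): the primal variable ascends the Lagrangian under a softmax policy while the dual variable performs projected ascent on the constraint violation, with $1/\sqrt{t}$ step sizes in the spirit of the stated rates. The rewards, costs, budget, and step sizes are illustrative assumptions; this is not the paper's algorithm, critic, or analysis.

```python
import numpy as np

# Hypothetical one-state CMDP: maximize E[r(a)] s.t. E[c(a)] <= b.
# (Illustrative numbers, not from the paper.)
r = np.array([1.0, 0.2])   # reward of each action
c = np.array([1.0, 0.0])   # cost of each action
b = 0.5                    # constraint budget

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

theta = np.zeros(2)        # policy parameters (softmax logits)
lam = 0.0                  # dual variable (Lagrange multiplier)
T = 5000
pi_sum = np.zeros(2)       # for the averaged (output) policy

for t in range(T):
    step = 0.5 / np.sqrt(t + 1)          # decaying step size, O(1/sqrt(t))
    pi = softmax(theta)
    pi_sum += pi
    # Exact policy gradient of the Lagrangian L(theta, lam) = E[r - lam*c]:
    # advantage of each action under the penalized reward.
    adv = (r - lam * c) - pi @ (r - lam * c)
    theta += step * pi * adv             # primal ascent step
    # Projected dual ascent on the constraint violation E[c] - b.
    lam = max(0.0, lam + step * (pi @ c - b))

pi_avg = pi_sum / T                      # average iterate as the output policy
```

The averaged policy settles near the constrained optimum, splitting probability between the high-reward, high-cost action and the free action so that the average cost hovers around the budget b.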

Cite

Text

Xu et al. "Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xu et al. "Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xu2025neurips-global/)

BibTeX

@inproceedings{xu2025neurips-global,
  title     = {{Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm}},
  author    = {Xu, Yang and Ganesh, Swetha and Mondal, Washim Uddin and Bai, Qinbo and Aggarwal, Vaneet},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xu2025neurips-global/}
}