Policy-Based Primal-Dual Methods for Concave CMDP with Variance Reduction
Abstract
We study Concave Constrained Markov Decision Processes (Concave CMDPs), in which both the objective and the constraints are defined as concave functions of the state-action occupancy measure. We propose the Variance-Reduced Primal-Dual Policy Gradient Algorithm (VR-PDPG), which updates the primal variable via policy gradient ascent and the dual variable via projected sub-gradient descent. Despite the challenges posed by the loss of additivity structure and the nonconcave nature of the problem, we establish the global convergence of VR-PDPG by exploiting a form of hidden concavity. In the exact setting, we prove an O(T^{-1/3}) convergence rate for both the average optimality gap and the constraint violation, which further improves to O(T^{-1/2}) under strong concavity of the objective in the occupancy measure. In the sample-based setting, we demonstrate that VR-PDPG achieves an O(ε^{-4}) sample complexity for ε-global optimality. Moreover, by incorporating a diminishing pessimistic term into the constraint, we show that VR-PDPG can attain zero constraint violation without compromising the convergence rate of the optimality gap. Finally, we validate our methods through numerical experiments.
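The abstract describes a primal-dual scheme: gradient ascent on the primal (policy) variable and projected sub-gradient descent on the dual (Lagrange multiplier) variable. Below is a minimal, hypothetical sketch of this generic primal-dual loop on a toy scalar problem (maximize f(x) = -(x-2)^2 subject to g(x) = 1 - x >= 0); it is only an illustration of the update pattern, not the paper's VR-PDPG, and in particular it omits the policy parameterization, occupancy measures, and variance-reduced gradient estimators that the paper develops. All function names and step sizes here are illustrative assumptions.

```python
# Toy primal-dual gradient method (illustrative only, not the paper's VR-PDPG).
# Problem: maximize f(x) = -(x - 2)^2  subject to  g(x) = 1 - x >= 0.
# Optimum: x* = 1 with multiplier lam* = 2 (from stationarity -2(x-2) - lam = 0).

def grad_f(x):
    return -2.0 * (x - 2.0)   # gradient of the concave objective

def g(x):
    return 1.0 - x            # constraint value; feasible when g(x) >= 0

x, lam = 0.0, 0.0             # primal and dual initial points
eta_x, eta_lam = 0.05, 0.05   # step sizes (illustrative choices)

for _ in range(2000):
    # Primal step: gradient ascent on the Lagrangian L(x, lam) = f(x) + lam * g(x).
    x += eta_x * (grad_f(x) - lam)        # d/dx [lam * g(x)] = -lam here
    # Dual step: projected sub-gradient descent, projecting onto lam >= 0.
    lam = max(0.0, lam - eta_lam * g(x))

print(x, lam)  # converges toward (1, 2)
```

The dual projection `max(0, ...)` keeps the multiplier nonnegative; the paper's algorithm replaces the exact gradients above with variance-reduced stochastic policy-gradient estimates.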
Cite

Text

Ying et al. "Policy-Based Primal-Dual Methods for Concave CMDP with Variance Reduction." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.18129

BibTeX
@article{ying2025jair-policybased,
title = {{Policy-Based Primal-Dual Methods for Concave CMDP with Variance Reduction}},
author = {Ying, Donghao and Guo, Mengzi Amy and Lee, Hyunin and Ding, Yuhao and Lavaei, Javad and Shen, Zuo-Jun Max},
journal = {Journal of Artificial Intelligence Research},
year = {2025},
doi = {10.1613/JAIR.1.18129},
volume = {83},
url = {https://mlanthology.org/jair/2025/ying2025jair-policybased/}
}