Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm
Abstract
This paper explores the realm of infinite horizon average reward Constrained Markov Decision Processes (CMDPs). To the best of our knowledge, this work is the first to delve into the regret and constraint violation analysis of average reward CMDPs with a general policy parametrization. To address this challenge, we propose a primal-dual-based policy gradient algorithm that adeptly manages the constraints while ensuring a low regret guarantee toward achieving a globally optimal policy. In particular, our proposed algorithm achieves $\tilde{\mathcal{O}}({T}^{4/5})$ objective regret and $\tilde{\mathcal{O}}({T}^{4/5})$ constraint violation bounds.
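To illustrate the primal-dual idea behind the abstract, below is a minimal sketch of a generic primal-dual policy gradient loop for an average-reward CMDP: the policy parameters are updated by ascent on the Lagrangian (reward minus a dual-weighted constraint cost) while the dual variable is updated by projected ascent on the constraint violation. This is not the paper's algorithm or its step-size schedule; the toy MDP, the softmax parameterization, the REINFORCE-style gradient estimator, the baseline, and all constants (`alpha`, `eta`, budget `b`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy CMDP: 2 states, 2 actions; rewards r(s,a), costs c(s,a),
# transition kernel P[s, a, s']. Everything here is an assumed example setup,
# not the paper's experimental configuration.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = rng.uniform(0, 1, size=(n_states, n_actions))   # reward function
c = rng.uniform(0, 1, size=(n_states, n_actions))   # constraint cost function
b = 0.5                                              # assumed budget: average cost <= b

def softmax_policy(theta, s):
    logits = theta[s]
    z = np.exp(logits - logits.max())
    return z / z.sum()

def rollout(theta, horizon=200):
    """Sample one trajectory; return visited states, actions, rewards, costs."""
    s = 0
    S, A, R, C = [], [], [], []
    for _ in range(horizon):
        p = softmax_policy(theta, s)
        a = rng.choice(n_actions, p=p)
        S.append(s); A.append(a); R.append(r[s, a]); C.append(c[s, a])
        s = rng.choice(n_states, p=P[s, a])
    return S, A, np.array(R), np.array(C)

# Generic primal-dual policy gradient loop (a sketch of the idea only).
theta = np.zeros((n_states, n_actions))   # policy parameters
lam = 0.0                                 # dual variable for the cost constraint
alpha, eta = 0.05, 0.05                   # assumed primal / dual step sizes

for _ in range(500):
    S, A, R, C = rollout(theta)
    avg_r, avg_c = R.mean(), C.mean()
    # REINFORCE-style gradient of the Lagrangian: avg reward - lam * (avg cost - b)
    grad = np.zeros_like(theta)
    for s, a, rew, cost in zip(S, A, R, C):
        p = softmax_policy(theta, s)
        glogpi = -p
        glogpi[a] += 1.0                  # gradient of log softmax w.r.t. logits
        # Crude advantage proxy: instantaneous Lagrangian payoff minus its average
        adv = (rew - lam * cost) - (avg_r - lam * avg_c)
        grad[s] += adv * glogpi
    theta += alpha * grad / len(S)                   # primal ascent on the Lagrangian
    lam = max(0.0, lam + eta * (avg_c - b))          # projected dual ascent on violation

print("avg reward:", avg_r, "avg cost:", avg_c, "lambda:", lam)
```

The dual variable grows while the estimated average cost exceeds the budget, which penalizes constraint-violating behavior in the primal update; this is the mechanism by which a primal-dual scheme trades off objective regret against constraint violation.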
Cite
Text
Bai et al. "Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm." Neural Information Processing Systems, 2024. doi:10.52202/079017-3447
Markdown
[Bai et al. "Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/bai2024neurips-learning/) doi:10.52202/079017-3447
BibTeX
@inproceedings{bai2024neurips-learning,
title = {{Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm}},
author = {Bai, Qinbo and Mondal, Washim Uddin and Aggarwal, Vaneet},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3447},
url = {https://mlanthology.org/neurips/2024/bai2024neurips-learning/}
}