Sample-Efficient Constrained Reinforcement Learning with General Parameterization

Abstract

We consider a constrained Markov Decision Problem (CMDP) where the goal of an agent is to maximize the expected discounted sum of rewards over an infinite horizon while ensuring that the expected discounted sum of costs exceeds a certain threshold. Building on the idea of momentum-based acceleration, we develop the Primal-Dual Accelerated Natural Policy Gradient (PD-ANPG) algorithm that ensures an $\epsilon$ global optimality gap and an $\epsilon$ constraint violation with $\tilde{\mathcal{O}}((1-\gamma)^{-7}\epsilon^{-2})$ sample complexity for general parameterized policies, where $\gamma$ denotes the discount factor. This improves the state-of-the-art sample complexity in general parameterized CMDPs by a factor of $\mathcal{O}((1-\gamma)^{-1}\epsilon^{-2})$ and achieves the theoretical lower bound in terms of $\epsilon^{-1}$.
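As an illustrative sketch (not taken verbatim from the paper), the CMDP and a generic primal-dual natural policy gradient update can be written as follows. Here $J_r(\theta)$ and $J_c(\theta)$ denote the expected discounted reward and cost returns under the parameterized policy $\pi_\theta$, $b$ is the constraint threshold, $\lambda$ is the Lagrange multiplier, and $F(\theta)$ is the Fisher information matrix; the step sizes $\eta$, $\eta_\lambda$ and the dual projection interval $[0,\Lambda]$ are illustrative notation not fixed by the abstract. The accelerated (momentum-based) gradient estimation that distinguishes PD-ANPG is omitted from this sketch.

$$
\max_{\theta}\; J_r(\theta) \quad \text{s.t.} \quad J_c(\theta) \ge b,
\qquad
J_g(\theta) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, g(s_t, a_t)\Big],\; g \in \{r, c\}.
$$

$$
L(\theta, \lambda) = J_r(\theta) + \lambda\big(J_c(\theta) - b\big),
\qquad
\theta_{k+1} = \theta_k + \eta\, F(\theta_k)^{-1} \nabla_{\theta} L(\theta_k, \lambda_k),
\qquad
\lambda_{k+1} = \mathcal{P}_{[0,\Lambda]}\big(\lambda_k - \eta_{\lambda}\,(J_c(\theta_k) - b)\big).
$$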

Cite

Text

Mondal and Aggarwal. "Sample-Efficient Constrained Reinforcement Learning with General Parameterization." Neural Information Processing Systems, 2024. doi:10.52202/079017-2184

Markdown

[Mondal and Aggarwal. "Sample-Efficient Constrained Reinforcement Learning with General Parameterization." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/mondal2024neurips-sampleefficient/) doi:10.52202/079017-2184

BibTeX

@inproceedings{mondal2024neurips-sampleefficient,
  title     = {{Sample-Efficient Constrained Reinforcement Learning with General Parameterization}},
  author    = {Mondal, Washim Uddin and Aggarwal, Vaneet},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2184},
  url       = {https://mlanthology.org/neurips/2024/mondal2024neurips-sampleefficient/}
}