A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning
Abstract
Offline constrained reinforcement learning (RL) aims to learn a policy that maximizes the expected cumulative reward subject to constraints on the expected cumulative cost, using a previously collected dataset. In this paper, we propose the Primal-Dual-Critic Algorithm (PDCA), a novel algorithm for offline constrained RL with general function approximation. PDCA runs a primal-dual algorithm on the Lagrangian function estimated by critics. The primal player employs a no-regret policy optimization oracle to maximize the Lagrangian estimate, and the dual player acts greedily to minimize the Lagrangian estimate. We show that PDCA finds a near-saddle point of the Lagrangian, which is nearly optimal for the constrained RL problem. Unlike previous work, which requires concentrability and a strong Bellman completeness assumption, PDCA requires only concentrability and realizability assumptions for sample-efficient learning.
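For intuition, the following is a minimal sketch of the primal-dual loop the abstract describes, written for a tabular setting with fixed critic estimates. It is not the paper's implementation: the names (`q_r`, `q_c`, `tau`, `lam_max`), the exponentiated-gradient choice for the no-regret primal player, and the uniform state weighting are all illustrative assumptions.

```python
import numpy as np

def pdca_sketch(q_r, q_c, tau, num_iters=100, lam_max=10.0, lr=0.1):
    """Hypothetical PDCA-style loop on critic estimates.

    q_r, q_c: critic estimates of reward/cost Q-values, shape (S, A).
    tau: cost threshold; lam_max: cap on the dual variable.
    """
    S, A = q_r.shape
    logits = np.zeros((S, A))      # primal player: softmax policy parameters
    avg_policy = np.zeros((S, A))  # average iterate approximates the saddle point

    for _ in range(num_iters):
        # Softmax policy from current logits.
        policy = np.exp(logits - logits.max(axis=1, keepdims=True))
        policy /= policy.sum(axis=1, keepdims=True)

        # Estimated cumulative cost of the current policy
        # (uniform state weighting purely for illustration).
        v_c = (policy * q_c).sum(axis=1).mean()

        # Dual player acts greedily: minimize lam * (v_c - tau) over [0, lam_max].
        lam = lam_max if v_c > tau else 0.0

        # Primal player: no-regret (exponentiated-gradient) step on the
        # Lagrangian critic estimate q_r - lam * q_c.
        logits += lr * (q_r - lam * q_c)

        avg_policy += policy / num_iters

    return avg_policy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pi = pdca_sketch(rng.random((5, 3)), rng.random((5, 3)), tau=0.4)
    print(pi)
```

Returning the average policy iterate, rather than the last one, mirrors the standard way a near-saddle point is extracted from a no-regret primal-dual scheme.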
Cite

Text

Hong et al. "A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning." Artificial Intelligence and Statistics, 2024.

Markdown

[Hong et al. "A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/hong2024aistats-primaldualcritic/)

BibTeX
@inproceedings{hong2024aistats-primaldualcritic,
title = {{A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning}},
author = {Hong, Kihyuk and Li, Yuhang and Tewari, Ambuj},
booktitle = {Artificial Intelligence and Statistics},
year = {2024},
pages = {280--288},
volume = {238},
url = {https://mlanthology.org/aistats/2024/hong2024aistats-primaldualcritic/}
}