Policy Gradient in Robust MDPs with Global Convergence Guarantee

Abstract

Robust Markov decision processes (RMDPs) provide a promising framework for computing reliable policies in the face of model errors. Many successful reinforcement learning algorithms build on variations of policy-gradient methods, but adapting these methods to RMDPs has been challenging. As a result, the applicability of RMDPs to large, practical domains remains limited. This paper proposes a new Double-Loop Robust Policy Gradient (DRPG), the first generic policy gradient method for RMDPs. In contrast with prior robust policy gradient algorithms, DRPG monotonically reduces approximation errors to guarantee convergence to a globally optimal policy in tabular RMDPs. We introduce a novel parametric transition kernel and solve the inner-loop robust policy via a gradient-based method. Finally, our numerical results demonstrate the utility of our new algorithm and confirm its global convergence properties.
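
To make the double-loop structure concrete, the following is a minimal, dependency-light Python sketch of the idea the abstract describes: an outer policy-gradient ascent step played against a worst-case transition kernel that an inner gradient loop approximates. It is not the authors' DRPG implementation; the softmax kernel parameterization, the quadratic penalty standing in for the paper's uncertainty set, the step sizes, and all names are assumptions made purely for illustration.

# A minimal sketch (not the authors' DRPG code) of the double-loop idea from the
# abstract: the outer loop runs policy-gradient ascent, while an inner loop runs
# gradient descent over a softmax-parameterized transition kernel to approximate
# the worst case. The quadratic penalty below stands in for the paper's
# uncertainty set; all step sizes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9                               # small tabular RMDP
r = rng.uniform(size=(S, A))                          # reward r(s, a)
P_nominal = rng.dirichlet(np.ones(S), size=(S, A))    # nominal kernel, shape (S, A, S)
kappa = 20.0                                          # penalty keeping the kernel near nominal

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def value(theta_pi, xi):
    """Average return of the softmax policy theta_pi under kernel parameters xi."""
    pi = softmax(theta_pi)                            # (S, A) policy
    P = softmax(xi)                                   # (S, A, S) parametric kernel
    P_pi = np.einsum("sa,sat->st", pi, P)             # state-to-state transitions under pi
    r_pi = (pi * r).sum(axis=1)
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return v.mean()                                   # uniform initial distribution

def grad(f, x, eps=1e-5):
    """Finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        d = np.zeros_like(x)
        d[idx] = eps
        g[idx] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

theta = np.zeros((S, A))                              # policy parameters
for outer in range(50):
    # Inner loop: the adversary pushes the kernel toward the worst case for theta.
    xi = np.log(P_nominal + 1e-8)
    adv_obj = lambda x: value(theta, x) + kappa * np.sum((softmax(x) - P_nominal) ** 2)
    for inner in range(20):
        xi -= 0.5 * grad(adv_obj, xi)
    # Outer loop: policy-gradient ascent against the approximate worst-case kernel.
    theta += 1.0 * grad(lambda t: value(t, xi), theta)

print("robust return estimate:", value(theta, xi))

In the paper's tabular setting, the key property is that the inner-loop approximation error shrinks monotonically across outer iterations, which underpins the global convergence guarantee; this toy sketch only illustrates the alternating outer/inner structure, not that guarantee.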

Cite

Text

Wang et al. "Policy Gradient in Robust MDPs with Global Convergence Guarantee." International Conference on Machine Learning, 2023.

Markdown

[Wang et al. "Policy Gradient in Robust MDPs with Global Convergence Guarantee." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/wang2023icml-policy/)

BibTeX

@inproceedings{wang2023icml-policy,
  title     = {{Policy Gradient in Robust MDPs with Global Convergence Guarantee}},
  author    = {Wang, Qiuhao and Ho, Chin Pang and Petrik, Marek},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {35763--35797},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/wang2023icml-policy/}
}