Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization

Abstract

Reinforcement learning (RL) is recognized as lacking generalization and robustness under environmental perturbations, which severely restricts its application to real-world robotics. Prior work claimed that adding regularization to the value function is equivalent to learning a robust policy under uncertain transitions. Although this regularization-robustness transformation is appealing for its simplicity and efficiency, it remains underexplored in continuous control tasks. In this paper, we propose a new regularizer named Uncertainty Set Regularizer (USR), which formulates the uncertainty set in the parameter space of the transition function. To deal with unknown uncertainty sets, we further propose a novel adversarial approach that generates them based on the value function. We evaluate USR on the Real-world Reinforcement Learning (RWRL) benchmark and the Unitree A1 robot, demonstrating improved robustness in perturbed test environments and sim-to-real scenarios.
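The core idea of worst-case regularization over a parametric uncertainty set can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes a linear transition model `W` and a linear value function `v` purely for illustration, and uses a first-order adversarial perturbation of the transition parameters inside an L2 ball of radius `epsilon` (all names are our assumptions).

```python
import numpy as np

# Hedged sketch: first-order worst-case regularization over an L2
# uncertainty set on transition-function parameters. The linear
# transition s' = W @ s and linear value V(s) = v @ s are simplifying
# assumptions, not the paper's model.

rng = np.random.default_rng(0)
s = rng.standard_normal(4)          # current state
W = rng.standard_normal((4, 4))     # nominal transition parameters
v = rng.standard_normal(4)          # linear value-function weights
epsilon = 0.1                       # radius of the uncertainty set

# Gradient of V(W @ s) with respect to W is the outer product v s^T.
grad_W = np.outer(v, s)

# Worst-case (value-minimizing) parameter perturbation lies on the
# boundary of the L2 ball, opposite the gradient direction.
delta = -epsilon * grad_W / (np.linalg.norm(grad_W) + 1e-12)

nominal_value = v @ (W @ s)
robust_value = v @ ((W + delta) @ s)

# The adversarial parameters can only lower the value estimate,
# giving a pessimistic (robust) target for policy evaluation.
assert robust_value <= nominal_value
```

For this linear case the worst case is exact: the robust value equals the nominal value minus `epsilon` times the gradient norm, which is the regularization term the value-based view of robustness adds.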

Cite

Text

Zhang et al. "Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization." Conference on Robot Learning, 2023.

Markdown

[Zhang et al. "Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/zhang2023corl-robust/)

BibTeX

@inproceedings{zhang2023corl-robust,
  title     = {{Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization}},
  author    = {Zhang, Yuan and Wang, Jianhong and Boedecker, Joschka},
  booktitle = {Conference on Robot Learning},
  year      = {2023},
  pages     = {1400--1424},
  volume    = {229},
  url       = {https://mlanthology.org/corl/2023/zhang2023corl-robust/}
}