User-Oriented Robust Reinforcement Learning

Abstract

Recently, improving the robustness of policies across different environments has attracted increasing attention in the reinforcement learning (RL) community. Existing robust RL methods mostly aim to achieve max-min robustness by optimizing the policy’s performance in the worst-case environment. In practice, however, a user deploying an RL policy may have different preferences over its performance across environments, and max-min robustness is oftentimes too conservative to satisfy them. Therefore, in this paper, we integrate user preference into policy learning in robust RL and propose a novel User-Oriented Robust RL (UOR-RL) framework. Specifically, we define a new User-Oriented Robustness (UOR) metric for RL, which allocates different weights to environments according to user preference and generalizes the max-min robustness metric. To optimize the UOR metric, we develop two UOR-RL training algorithms, for the scenarios with and without an a priori known environment distribution, respectively. Theoretically, we prove that our UOR-RL training algorithms converge to near-optimal policies even with inaccurate or no knowledge of the environment distribution. Furthermore, we carry out extensive experimental evaluations on 6 MuJoCo tasks. The experimental results demonstrate that UOR-RL is comparable to state-of-the-art baselines under the average-case and worst-case performance metrics and, more importantly, establishes new state-of-the-art performance under the UOR metric.
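The core idea — weighting environments by user preference so that max-min robustness becomes a special case — can be illustrated with a minimal sketch. This is not the paper's algorithm; the function name `uor_score` and the numeric returns are hypothetical, and the sketch only shows the aggregation step, assuming per-environment expected returns are already estimated.

```python
# Illustrative sketch (not the paper's method): aggregate a policy's
# per-environment returns with user-preference weights over outcome ranks.
# Putting all weight on the worst rank recovers max-min (worst-case) robustness.

def uor_score(returns, weights):
    """Preference-weighted aggregate of per-environment returns.

    `returns` holds the policy's expected return in each environment;
    `weights[i]` is the user's preference weight on the i-th worst outcome,
    so larger early weights express a more conservative (worst-case-leaning) user.
    """
    ranked = sorted(returns)  # worst environment first
    return sum(w * r for w, r in zip(weights, ranked))

returns = [10.0, 40.0, 80.0]  # hypothetical per-environment returns of one policy

worst_case = uor_score(returns, [1.0, 0.0, 0.0])  # max-min style: only the worst counts
balanced = uor_score(returns, [0.5, 0.3, 0.2])    # robustness-leaning, but not only worst case
```

A policy ranking under `worst_case` can differ from the ranking under `balanced`, which is the point of a user-oriented metric: the weight vector lets the user trade conservatism against average performance instead of being locked into the max-min objective.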

Cite

Text

You et al. "User-Oriented Robust Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I12.26781

Markdown

[You et al. "User-Oriented Robust Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/you2023aaai-user/) doi:10.1609/AAAI.V37I12.26781

BibTeX

@inproceedings{you2023aaai-user,
  title     = {{User-Oriented Robust Reinforcement Learning}},
  author    = {You, Haoyi and Yu, Beichen and Jin, Haiming and Yang, Zhaoxing and Sun, Jiahui},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {15269-15277},
  doi       = {10.1609/AAAI.V37I12.26781},
  url       = {https://mlanthology.org/aaai/2023/you2023aaai-user/}
}