RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning

Abstract

Reinforcement Learning from Human Feedback (RLHF) has recently surged in popularity, particularly for aligning large language models and other AI systems with human intentions. At its core, RLHF can be viewed as a specialized instance of Preference-based Reinforcement Learning (PbRL), where the preferences specifically originate from human judgments rather than arbitrary evaluators. Despite this connection, most existing approaches in both RLHF and PbRL primarily focus on optimizing a mean reward objective, neglecting scenarios that necessitate risk-awareness, such as AI safety, healthcare, and autonomous driving. These scenarios often operate under a one-episode-reward setting, which makes conventional risk-sensitive objectives inapplicable. To address this, we explore and prove the applicability of two risk-aware objectives to PbRL: nested and static quantile risk objectives. We also introduce Risk-Aware PbRL (RA-PbRL), an algorithm designed to optimize both nested and static objectives. Additionally, we provide a theoretical analysis of the regret upper bounds, demonstrating that they are sublinear with respect to the number of episodes, and present empirical results to support our findings. Our code is available at https://github.com/aguilarjose11/PbRLNeurips.
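For intuition on the static quantile-risk objective the abstract mentions, here is a minimal sketch of one common instance, empirical CVaR over episode returns. The function name, the risk level alpha, and the toy policies are illustrative assumptions, not the paper's algorithm or code:

import numpy as np

def static_cvar(returns, alpha=0.1):
    # Empirical CVaR_alpha: mean of the worst alpha-fraction of episode returns.
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Toy usage: two policies with equal mean return but different tail risk.
rng = np.random.default_rng(0)
safe = rng.normal(1.0, 0.1, size=1000)
risky = rng.normal(1.0, 1.0, size=1000)
print(static_cvar(safe), static_cvar(risky))  # the risk-aware objective favors `safe`

A mean-reward objective cannot distinguish these two policies, which is the gap the risk-aware objectives in the paper are meant to close; the nested variant instead applies the risk measure recursively at each step rather than to the whole-episode return.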

Cite

Text

Zhao et al. "RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-1945

Markdown

[Zhao et al. "RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhao2024neurips-rapbrl/) doi:10.52202/079017-1945

BibTeX

@inproceedings{zhao2024neurips-rapbrl,
  title     = {{RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning}},
  author    = {Zhao, Yujie and Escamilla, Jose Efraim Aguilar and Lu, Weyl and Wang, Huazheng},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1945},
  url       = {https://mlanthology.org/neurips/2024/zhao2024neurips-rapbrl/}
}