Exploring and Addressing Reward Confusion in Offline Preference Learning

Abstract

Spurious correlations in a reward model's training data can prevent Reinforcement Learning from Human Feedback (RLHF) from identifying the desired goal and can induce unwanted behaviors. In this work, we study the reward confusion problem in offline RLHF, where such spurious correlations are present in the data. We create a lightweight benchmark to study this problem and propose a method that reduces reward confusion through active learning, leveraging model uncertainty and the transitivity of preferences.
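The abstract describes the approach only at a high level. As a rough illustration of the kind of procedure it suggests, the sketch below actively queries the preference pairs on which an ensemble of reward models disagrees most, while using transitivity of already-known preferences to fill in labels for free. The ensemble, the linear reward models, the oracle, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: uncertainty-driven active preference querying with
# transitivity-based label propagation. All details are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "segment" is a feature vector; an ensemble of linear reward
# models stands in for whatever reward-model ensemble measures uncertainty.
n_segments, dim, ensemble_size = 50, 8, 5
segments = rng.normal(size=(n_segments, dim))
ensemble = rng.normal(size=(ensemble_size, dim))  # one weight vector per member

def preference_probs(i, j):
    """Bradley-Terry probability that segment i is preferred over j, per ensemble member."""
    r_i = ensemble @ segments[i]
    r_j = ensemble @ segments[j]
    return 1.0 / (1.0 + np.exp(-(r_i - r_j)))

def uncertainty(i, j):
    """Ensemble disagreement: std of the predicted preference probabilities."""
    return preference_probs(i, j).std()

known = set()  # (winner, loser) pairs obtained from queries or inference

def infer_by_transitivity(i, j):
    """Return True/False if the (i, j) preference follows from known labels, else None."""
    beats = {a: set() for a in range(n_segments)}
    for (w, l) in known:
        beats[w].add(l)
    # one-step transitivity is enough for a sketch: i > l and l > j implies i > j
    if any(l == j or j in beats[l] for l in beats[i]):
        return True
    if any(l == i or i in beats[l] for l in beats[j]):
        return False
    return None

def oracle(i, j):
    """Stand-in for a human labeler (here: a hidden ground-truth reward)."""
    true_w = np.ones(dim)
    return float(segments[i] @ true_w) > float(segments[j] @ true_w)

# Active-learning loop: spend each query on the most uncertain pair whose label
# cannot already be inferred from transitivity.
candidate_pairs = [(i, j) for i in range(n_segments) for j in range(i + 1, n_segments)]
budget = 20
for _ in range(budget):
    scored = sorted(candidate_pairs, key=lambda p: uncertainty(*p), reverse=True)
    for (i, j) in scored:
        if (i, j) in known or (j, i) in known:
            continue
        inferred = infer_by_transitivity(i, j)
        if inferred is not None:
            known.add((i, j) if inferred else (j, i))  # free label, no query spent
            continue
        label = oracle(i, j)  # spend one query on a genuinely uncertain pair
        known.add((i, j) if label else (j, i))
        break

print(f"labeled comparisons (queried or inferred): {len(known)}")
```

In this toy loop, the query budget is reserved for pairs where the ensemble is uncertain and transitivity offers no answer; the paper's actual benchmark, models, and selection criterion may differ.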

Cite

Text

Chen et al. "Exploring and Addressing Reward Confusion in Offline Preference Learning." NeurIPS 2024 Workshops: BDU, 2024.

Markdown

[Chen et al. "Exploring and Addressing Reward Confusion in Offline Preference Learning." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/chen2024neuripsw-exploring/)

BibTeX

@inproceedings{chen2024neuripsw-exploring,
  title     = {{Exploring and Addressing Reward Confusion in Offline Preference Learning}},
  author    = {Chen, Xin and Toyer, Sam and Shkurti, Florian},
  booktitle = {NeurIPS 2024 Workshops: BDU},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chen2024neuripsw-exploring/}
}