Online Iterative Reinforcement Learning from Human Feedback with General Preference Model

Abstract

We investigate Reinforcement Learning from Human Feedback (RLHF) in the context of a general preference oracle. In particular, we do not assume the existence of a reward function or of an oracle preference signal drawn from the Bradley-Terry model, as most prior works do. We consider a standard mathematical formulation: a reverse-KL regularized minimax game between two LLMs for RLHF under a general preference oracle. The learning objective of this formulation is to find a policy that is consistently preferred by the KL-regularized preference oracle over any competing LLM. We show that this framework is strictly more general than the reward-based one, and we propose sample-efficient algorithms both for offline learning from a pre-collected preference dataset and for online learning, where the preference oracle can be queried during training. Empirical studies verify the effectiveness of the proposed framework.
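As a rough sketch (not quoted from the paper), the reverse-KL regularized minimax game mentioned above is commonly written as shown below; the notation here is an assumption for illustration: $\pi_0$ denotes the reference policy, $\eta > 0$ the KL coefficient, $d_0$ the prompt distribution, and $\mathbb{P}(a \succ a' \mid x)$ the general preference oracle.

% Assumed notation (not copied from the paper): \pi_0 = reference policy, \eta = KL coefficient,
% d_0 = prompt distribution, \mathbb{P}(a \succ a' \mid x) = general preference oracle.
% The max player \pi pays a KL penalty toward \pi_0, while the min player \pi' is
% penalized with the opposite sign, giving a symmetric KL-regularized two-player game.
\[
(\pi^\star, {\pi'}^\star) \;=\; \arg\max_{\pi}\,\arg\min_{\pi'}\;
\mathbb{E}_{x \sim d_0,\; a \sim \pi(\cdot \mid x),\; a' \sim \pi'(\cdot \mid x)}
\bigl[\, \mathbb{P}(a \succ a' \mid x) \,\bigr]
\;-\; \eta\, \mathbb{E}_{x \sim d_0}\!\bigl[\mathrm{KL}\bigl(\pi(\cdot \mid x)\,\|\,\pi_0(\cdot \mid x)\bigr)\bigr]
\;+\; \eta\, \mathbb{E}_{x \sim d_0}\!\bigl[\mathrm{KL}\bigl(\pi'(\cdot \mid x)\,\|\,\pi_0(\cdot \mid x)\bigr)\bigr]
\]

A Nash equilibrium of this game is a policy that no KL-regularized competitor can beat under the preference oracle, which is the "consistently preferred" objective described in the abstract.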

Cite

Text

Ye et al. "Online Iterative Reinforcement Learning from Human Feedback with General Preference Model." Neural Information Processing Systems, 2024. doi:10.52202/079017-2598

Markdown

[Ye et al. "Online Iterative Reinforcement Learning from Human Feedback with General Preference Model." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ye2024neurips-online/) doi:10.52202/079017-2598

BibTeX

@inproceedings{ye2024neurips-online,
  title     = {{Online Iterative Reinforcement Learning from Human Feedback with General Preference Model}},
  author    = {Ye, Chenlu and Xiong, Wei and Zhang, Yuheng and Dong, Hanze and Jiang, Nan and Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2598},
  url       = {https://mlanthology.org/neurips/2024/ye2024neurips-online/}
}