Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning

Abstract

Current Reinforcement Learning from Human Feedback (RLHF) techniques cannot account for differences in human preferences across a diverse population. When such differences arise, these frameworks simply average over them, leading to inaccurate rewards and poor performance for individual subgroups. To address the need for pluralistic alignment, we develop a class of multimodal RLHF methods based on a latent variable formulation: inferring a novel user-specific latent and learning reward models and policies conditioned on this latent, without additional user-specific data. While conceptually simple, we show that in practice, this reward modeling requires careful algorithmic considerations around model architecture and reward scaling. To empirically validate our proposed technique, we first show that it can provide a way to combat under-specification in simulated control problems, inferring and optimizing user-specific reward functions. Next, we conduct experiments on pluralistic language datasets representing diverse user preferences and demonstrate improved reward function accuracy. We additionally show the benefits of this probabilistic framework in actively learning user preferences. This work enables learning from diverse populations, an important challenge naturally occurring in problems from robot learning to foundation model alignment.
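
As a rough illustration of the latent variable formulation described in the abstract, the sketch below (hypothetical code, not the authors' implementation) conditions a reward model on a user latent inferred by a variational encoder, trained with a Bradley-Terry preference likelihood plus a KL regularizer toward a standard-normal prior. All names, architectures, and hyperparameters here are illustrative assumptions; for brevity the same labeled pair serves as both the encoder context and the scored query, whereas the paper infers the latent from a separate context set of the user's annotations.

import torch
import torch.nn as nn

class VariationalRewardModel(nn.Module):
    def __init__(self, obs_dim, latent_dim=8, hidden=64):
        super().__init__()
        # Encoder: maps a user's labeled comparison (segment pair + label)
        # to the parameters of an approximate posterior q(z | user data).
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # mean and log-variance
        )
        # Decoder: a reward function conditioned on the inferred user latent z.
        self.reward = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, seg_a, seg_b, prefs):
        # prefs: float labels in {0, 1} of shape (batch, 1).
        mu, log_var = self.encoder(torch.cat([seg_a, seg_b, prefs], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        r_a = self.reward(torch.cat([seg_a, z], -1))
        r_b = self.reward(torch.cat([seg_b, z], -1))
        return r_a, r_b, mu, log_var

def elbo_loss(r_a, r_b, prefs, mu, log_var, beta=1e-3):
    # Bradley-Terry likelihood of the observed preferences ...
    bt = nn.functional.binary_cross_entropy_with_logits(r_a - r_b, prefs)
    # ... regularized toward a standard-normal prior over the user latent.
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
    return bt + beta * kl

# Example usage with random placeholder data:
model = VariationalRewardModel(obs_dim=10)
a, b = torch.randn(32, 10), torch.randn(32, 10)
y = torch.randint(0, 2, (32, 1)).float()
r_a, r_b, mu, lv = model(a, b, y)
elbo_loss(r_a, r_b, y, mu, lv).backward()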

Cite

Text

Poddar et al. "Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning." NeurIPS 2024 Workshops: Pluralistic-Alignment, 2024.

Markdown

[Poddar et al. "Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning." NeurIPS 2024 Workshops: Pluralistic-Alignment, 2024.](https://mlanthology.org/neuripsw/2024/poddar2024neuripsw-personalizing/)

BibTeX

@inproceedings{poddar2024neuripsw-personalizing,
  title     = {{Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning}},
  author    = {Poddar, Sriyash and Wan, Yanming and Ivison, Hamish and Gupta, Abhishek and Jaques, Natasha},
  booktitle = {NeurIPS 2024 Workshops: Pluralistic-Alignment},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/poddar2024neuripsw-personalizing/}
}