Aligning Crowd Feedback via Distributional Preference Reward Modeling
Abstract
Deep Reinforcement Learning is widely used to align large language models (LLMs) with human preferences. However, conventional reward modeling predominantly depends on human annotations provided by a select cohort of individuals. Such dependence may unintentionally result in skewed models that reflect the inclinations of these annotators, thereby failing to adequately represent the wider population's expectations. We propose the Distributional Preference Reward Model (DPRM), a simple yet effective framework to align large language models with diverse human preferences. To this end, we characterize multiple preferences by a categorical distribution and introduce a Bayesian updater to accommodate shifted or new preferences. On top of that, we design an optimal-transportation-based loss to calibrate DPRM to align with the preference distribution. Finally, the expected reward is utilized to fine-tune an LLM policy to generate responses favored by the population. Our experiments show that DPRM significantly enhances the alignment of LLMs with population preferences, yielding more accurate, unbiased, and contextually appropriate responses.
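To make the described pipeline concrete, below is a minimal PyTorch sketch of two components the abstract names: an optimal-transport loss that calibrates a categorical reward distribution against crowd preference labels, and the expected reward used as the scalar signal for policy fine-tuning. The number of preference bins K, the bin values, and the closed-form Wasserstein-1 loss over ordered bins are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Assumed setup: the reward model outputs logits over K ordered preference
# "bins" (a categorical distribution), and the target is the empirical
# distribution of crowd annotations for the same prompt/response pair.
K = 5                                       # number of preference levels (assumed)
bin_values = torch.linspace(-1.0, 1.0, K)   # reward attached to each bin (assumed)

def ot_loss(pred_logits, target_probs):
    """Optimal-transport (Wasserstein-1) loss between two categorical
    distributions over ordered bins, computed as the L1 distance between
    their CDFs. This closed form holds for 1-D ordered categories; the
    paper's exact OT loss may differ."""
    pred_probs = F.softmax(pred_logits, dim=-1)
    cdf_pred = torch.cumsum(pred_probs, dim=-1)
    cdf_target = torch.cumsum(target_probs, dim=-1)
    return (cdf_pred - cdf_target).abs().sum(dim=-1).mean()

def expected_reward(pred_logits):
    """Scalar reward for policy fine-tuning: expectation of the bin values
    under the predicted preference distribution."""
    pred_probs = F.softmax(pred_logits, dim=-1)
    return (pred_probs * bin_values).sum(dim=-1)

# Toy usage: a batch of 2 responses, each with crowd label counts over K bins.
logits = torch.randn(2, K, requires_grad=True)
counts = torch.tensor([[1., 4., 10., 3., 2.], [0., 1., 2., 7., 10.]])
target = counts / counts.sum(dim=-1, keepdim=True)   # empirical preference distribution

loss = ot_loss(logits, target)
loss.backward()                          # calibrates the distributional reward head
print(expected_reward(logits.detach()))  # scalar rewards fed to RL fine-tuning
```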
Cite
Text
Li et al. "Aligning Crowd Feedback via Distributional Preference Reward Modeling." ICML 2024 Workshops: MFHAIA, 2024.
Markdown
[Li et al. "Aligning Crowd Feedback via Distributional Preference Reward Modeling." ICML 2024 Workshops: MFHAIA, 2024.](https://mlanthology.org/icmlw/2024/li2024icmlw-aligning/)
BibTeX
@inproceedings{li2024icmlw-aligning,
title = {{Aligning Crowd Feedback via Distributional Preference Reward Modeling}},
author = {Li, Dexun and Zhang, Cong and Dong, Kuicai and Deik, Derrick Goh Xin and Tang, Ruiming and Liu, Yong},
booktitle = {ICML 2024 Workshops: MFHAIA},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/li2024icmlw-aligning/}
}