RLHF and IIA: Perverse Incentives
Abstract
Existing algorithms for reinforcement learning from human feedback (RLHF) can incentivize responses at odds with human preferences because they are based on models that assume independence of irrelevant alternatives (IIA). The perverse incentives induced by the IIA assumption hinder innovation in query formats and learning algorithms.
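For context, the IIA property assumed by standard preference models (e.g., a softmax / Plackett-Luce model over scalar rewards) says that the relative probability of choosing one response over another is unaffected by which other candidates appear in the query. The following is a minimal sketch of that property, assuming a softmax choice model with hypothetical reward values; it is an illustration, not code from the paper:

import numpy as np

# Sketch: the softmax (Plackett-Luce) choice model satisfies IIA --
# the odds of picking response A over response B do not change when a
# third candidate C is added to the query.

def choice_probs(rewards):
    """Softmax choice probabilities over a candidate set of responses."""
    z = np.exp(rewards - np.max(rewards))  # shift by max for numerical stability
    return z / z.sum()

r_a, r_b, r_c = 1.0, 0.2, 3.0  # hypothetical scalar rewards for responses A, B, C

p_pair = choice_probs(np.array([r_a, r_b]))         # query offers {A, B}
p_triple = choice_probs(np.array([r_a, r_b, r_c]))  # query offers {A, B, C}

print(p_pair[0] / p_pair[1])      # exp(r_a - r_b), roughly 2.23
print(p_triple[0] / p_triple[1])  # identical ratio: IIA holds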
Cite
Text
Xu et al. "RLHF and IIA: Perverse Incentives." ICML 2024 Workshops: MFHAIA, 2024.
Markdown
[Xu et al. "RLHF and IIA: Perverse Incentives." ICML 2024 Workshops: MFHAIA, 2024.](https://mlanthology.org/icmlw/2024/xu2024icmlw-rlhf/)
BibTeX
@inproceedings{xu2024icmlw-rlhf,
title = {{RLHF and IIA: Perverse Incentives}},
author = {Xu, Wanqiao and Dong, Shi and Lu, Xiuyuan and Lam, Grace and Wen, Zheng and Van Roy, Benjamin},
booktitle = {ICML 2024 Workshops: MFHAIA},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/xu2024icmlw-rlhf/}
}