Learning Optimal Advantage from Preferences and Mistaking It for Reward
Abstract
We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments---as used in reinforcement learning from human feedback (RLHF)---including those used to fine-tune ChatGPT and other contemporary language models. Most recent work on such algorithms assumes that human preferences are generated based only upon the reward accrued within those segments, which we call their partial return. But if this assumption is false because people base their preferences on information other than partial return, then what type of function is their algorithm learning from these preferences? We argue that this function is better thought of as an approximation of the optimal advantage function, not as a partial return function as previously believed.
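For readers who want the contrast in symbols, below is a minimal sketch of the two preference models the abstract contrasts. The notation (segment \(\sigma\), partial return \(\Sigma_{\sigma} r\), optimal advantage \(A^*\)) and the exact logistic forms follow common RLHF conventions and the authors' related work on regret-based preference models; they are an assumption of this sketch, not text quoted from the workshop paper.

% Sketch only; notation assumed, not quoted from the paper.
% A segment is sigma = (s_0, a_0, s_1, a_1, ..., s_T).
\[
  \Sigma_{\sigma} r \;=\; \sum_{t=0}^{T-1} r(s_t, a_t)
  \qquad \text{(partial return of segment } \sigma \text{)}
\]
% The widely used partial-return (Bradley--Terry) preference model:
\[
  P(\sigma_1 \succ \sigma_2)
  \;=\; \operatorname{logistic}\!\big( \Sigma_{\sigma_1} r - \Sigma_{\sigma_2} r \big)
\]
% The alternative interpretation argued for here: the learned scoring function
% is better viewed as summing optimal advantages A^*(s,a) = Q^*(s,a) - V^*(s)
% over each segment, so that preferences track regret rather than partial return:
\[
  P(\sigma_1 \succ \sigma_2)
  \;=\; \operatorname{logistic}\!\Big( \textstyle\sum_{t} A^*\!\big(s^{1}_t, a^{1}_t\big)
        - \sum_{t} A^*\!\big(s^{2}_t, a^{2}_t\big) \Big)
\]

In both models the same learned function is plugged into the logistic; the paper's point is that, when human preferences are not driven purely by partial return, the quantity that fits the data is closer to \(A^*\) than to \(r\), so treating it as a reward function is a category error.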
Cite
Text
Knox et al. "Learning Optimal Advantage from Preferences and Mistaking It for Reward." ICML 2023 Workshops: MFPL, 2023.
Markdown
[Knox et al. "Learning Optimal Advantage from Preferences and Mistaking It for Reward." ICML 2023 Workshops: MFPL, 2023.](https://mlanthology.org/icmlw/2023/knox2023icmlw-learning/)
BibTeX
@inproceedings{knox2023icmlw-learning,
  title = {{Learning Optimal Advantage from Preferences and Mistaking It for Reward}},
  author = {Knox, W. Bradley and Hatgis-Kessell, Stephane and Adalgeirsson, Sigurdur Orn and Booth, Serena and Dragan, Anca and Stone, Peter and Niekum, Scott},
  booktitle = {ICML 2023 Workshops: MFPL},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/knox2023icmlw-learning/}
}