Preference Transformer: Modeling Human Preferences Using Transformers for RL
Abstract
Preference-based reinforcement learning (RL) provides a framework to train agents using human preferences between two behaviors. However, preference-based RL has been challenging to scale since it requires a large amount of human feedback to learn a reward function aligned with human intent. In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers. Unlike prior approaches, which assume human judgment is based on Markovian rewards that contribute equally to the decision, we introduce a new preference model based on a weighted sum of non-Markovian rewards. We then design the proposed preference model using a transformer architecture that stacks causal and bidirectional self-attention layers. We demonstrate that Preference Transformer can solve a variety of control tasks using real human preferences, where prior approaches fail. We also show that Preference Transformer can induce a well-specified reward and attend to critical events in the trajectory by automatically capturing the temporal dependencies in human decision-making. Code is available on the project website: https://sites.google.com/view/preference-transformer.
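The preference model sketched in the abstract can be illustrated with a short snippet. This is a minimal sketch, not the paper's implementation: it assumes a Bradley-Terry-style choice model in which each trajectory segment is scored by a weighted sum of per-step (possibly non-Markovian) rewards, with the importance weights standing in for those the transformer would produce.

```python
import math

def preference_prob(rewards_0, weights_0, rewards_1, weights_1):
    """Probability that segment 1 is preferred over segment 0 under a
    Bradley-Terry-style model. Each segment's score is a weighted sum
    of per-step rewards; the weights here are placeholders for the
    importance weights a learned model would predict."""
    score_0 = sum(w * r for w, r in zip(weights_0, rewards_0))
    score_1 = sum(w * r for w, r in zip(weights_1, rewards_1))
    # Numerically stable two-way softmax over the segment scores.
    m = max(score_0, score_1)
    e0 = math.exp(score_0 - m)
    e1 = math.exp(score_1 - m)
    return e1 / (e0 + e1)
```

With equal weights this reduces to the standard equal-contribution preference model; non-uniform weights let critical timesteps dominate the comparison.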
Cite
Text
Kim et al. "Preference Transformer: Modeling Human Preferences Using Transformers for RL." International Conference on Learning Representations, 2023.
Markdown
[Kim et al. "Preference Transformer: Modeling Human Preferences Using Transformers for RL." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/kim2023iclr-preference/)
BibTeX
@inproceedings{kim2023iclr-preference,
title = {{Preference Transformer: Modeling Human Preferences Using Transformers for RL}},
author = {Kim, Changyeon and Park, Jongjin and Shin, Jinwoo and Lee, Honglak and Abbeel, Pieter and Lee, Kimin},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/kim2023iclr-preference/}
}