Behaviour Preference Regression for Offline Reinforcement Learning
Abstract
Offline reinforcement learning (RL) methods aim to learn optimal policies with access only to the trajectories in a fixed dataset. Policy constraint methods formulate policy learning as an optimization problem that balances maximizing reward against minimizing deviation from the behavior policy. Closed-form solutions to this problem can be derived as weighted behavioral cloning objectives whose weights, in principle, require computing an intractable partition function. Reinforcement learning has gained popularity in language modeling as a way to align models with human preferences; some recent works rank paired completions with a preference model and then directly increase the likelihood of the preferred completion. We adapt this paired-comparison approach to offline RL. By reformulating the paired-sample optimization problem, we fit the maximum mode of the Q function while maximizing the behavioral consistency of policy actions. This yields our algorithm, Behaviour Preference Regression for offline RL (BPR). We empirically evaluate BPR on the widely used D4RL Locomotion and Antmaze datasets, as well as on the more challenging V-D4RL suite, which operates in image-based state spaces. BPR demonstrates state-of-the-art performance across all domains. Our on-policy experiments suggest that BPR can take advantage of the stability of on-policy value functions with minimal performance degradation on Locomotion datasets.
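For context on the partition-function issue, the closed-form solution the abstract refers to is the standard KL-constrained policy improvement result used by weighted behavioral cloning methods such as AWR/AWAC; the sketch below follows that standard derivation and is not taken from the paper itself.

```latex
% KL-constrained policy improvement and its closed form; \pi_\beta is the
% behavior policy, \alpha a temperature, and \mathcal{D} the offline dataset.
\begin{align*}
\pi^* &= \arg\max_{\pi}\;
  \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi}\!\left[Q(s,a)\right]
  - \alpha\, D_{\mathrm{KL}}\!\bigl(\pi(\cdot \mid s)\,\|\,\pi_\beta(\cdot \mid s)\bigr) \\
\pi^*(a \mid s) &= \frac{1}{Z(s)}\, \pi_\beta(a \mid s)\,
  \exp\!\Bigl(\tfrac{1}{\alpha}\, Q(s,a)\Bigr),
\qquad
Z(s) = \int_{\mathcal{A}} \pi_\beta(a \mid s)\,
  \exp\!\Bigl(\tfrac{1}{\alpha}\, Q(s,a)\Bigr)\, da
\end{align*}
```

Projecting \(\pi^*\) onto a parametric policy \(\pi_\theta\) yields a behavioral cloning objective weighted by \(\exp(Q(s,a)/\alpha)/Z(s)\), where the per-state normalizer \(Z(s)\) is the intractable partition function the abstract mentions.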
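The abstract does not spell out BPR's training loss, so the following is a hypothetical PyTorch sketch of a paired-comparison weighted regression in the spirit it describes: two candidate actions are ranked by the Q function and the policy is regressed toward the preferred one. The names `policy` and `q_fn` and the sigmoid confidence weight are illustrative assumptions, not the published objective.

```python
# Hypothetical sketch of a paired-comparison policy objective in the spirit
# of the abstract; `policy`, `q_fn`, and the weighting are assumptions, not
# the published BPR loss.
import torch


def paired_preference_loss(policy, q_fn, states, actions_a, actions_b):
    """Rank two candidate actions per state with the Q function, then
    regress the policy toward the preferred action, weighted by a
    Bradley-Terry style confidence in the ranking (assumed form)."""
    with torch.no_grad():
        q_a = q_fn(states, actions_a)              # shape: (batch,)
        q_b = q_fn(states, actions_b)              # shape: (batch,)
        prefer_a = (q_a >= q_b).float()            # 1.0 where a is preferred
        weight = torch.sigmoid((q_a - q_b).abs())  # confidence in the ranking
    log_p_a = policy.log_prob(states, actions_a)   # policy log-likelihoods
    log_p_b = policy.log_prob(states, actions_b)
    preferred_log_p = prefer_a * log_p_a + (1.0 - prefer_a) * log_p_b
    # Weighted behavioral cloning of the preferred action: no per-state
    # partition function appears because only a pairwise comparison is used.
    return -(weight * preferred_log_p).mean()
```

A possible appeal of such a pairwise formulation is that only two concrete actions are ever compared per state, so no per-state partition function Z(s) has to be estimated.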
Cite
Text
Srinivasan and Knottenbelt. "Behaviour Preference Regression for Offline Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34267
Markdown
[Srinivasan and Knottenbelt. "Behaviour Preference Regression for Offline Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/srinivasan2025aaai-behaviour/) doi:10.1609/AAAI.V39I19.34267
BibTeX
@inproceedings{srinivasan2025aaai-behaviour,
title = {{Behaviour Preference Regression for Offline Reinforcement Learning}},
author = {Srinivasan, Padmanaba and Knottenbelt, William},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {20575--20583},
doi = {10.1609/AAAI.V39I19.34267},
url = {https://mlanthology.org/aaai/2025/srinivasan2025aaai-behaviour/}
}