Design Considerations in Offline Preference-Based RL
Abstract
Offline algorithms for Reinforcement Learning from Human Preferences (RLHF), which use only a fixed dataset of sampled responses given an input, and preference feedback among these responses, have gained increasing prominence in the literature on aligning language models. In this paper, we study how the different design choices made in methods such as DPO, IPO, SLiC and many variants influence the quality of the learned policy, from a theoretical perspective. Our treatment yields insights into the choices of loss function, the policy which is used to normalize log-likelihoods, and also the role of the data sampling policy. Notably, our results do not rely on the standard reparameterization-style arguments used to motivate some of the algorithms in this family, which allows us to give a unified treatment to a broad class of methods. We also conduct a small empirical study to verify some of the theoretical findings on a standard summarization benchmark.
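For concreteness, the methods named in the abstract are commonly written in a shared template in which the three design choices being studied appear explicitly: the outer loss (logistic, squared, or hinge), the reference policy used to normalize log-likelihoods, and the data distribution supplying the response pairs. The sketch below uses the standard published forms of DPO, IPO, and SLiC with illustrative notation (β, τ, δ, r_θ); it is not taken from this paper, and SLiC in particular is often stated directly on log-likelihoods with a separate regularizer rather than on normalized ratios.

```latex
% Shared notation (illustrative): r_\theta(x,y) = \log\frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
% is the log-likelihood normalized by a reference policy, and (x, y_w, y_l) \sim \mathcal{D}
% is a prompt with a preferred and a dispreferred response drawn by the sampling policy.
\begin{align*}
  \mathcal{L}_{\mathrm{DPO}}(\theta)  &= -\,\mathbb{E}\!\left[\log \sigma\!\big(\beta\,[\,r_\theta(x,y_w) - r_\theta(x,y_l)\,]\big)\right]      && \text{(logistic loss)} \\
  \mathcal{L}_{\mathrm{IPO}}(\theta)  &= \mathbb{E}\!\left[\big(r_\theta(x,y_w) - r_\theta(x,y_l) - \tfrac{1}{2\tau}\big)^{2}\right]            && \text{(squared loss)} \\
  \mathcal{L}_{\mathrm{SLiC}}(\theta) &= \mathbb{E}\!\left[\max\!\big(0,\; \delta - \beta\,[\,r_\theta(x,y_w) - r_\theta(x,y_l)\,]\big)\right]  && \text{(hinge loss)}
\end{align*}
```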
Cite
Text
Agarwal et al. "Design Considerations in Offline Preference-Based RL." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Agarwal et al. "Design Considerations in Offline Preference-Based RL." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/agarwal2025icml-design/)
BibTeX
@inproceedings{agarwal2025icml-design,
  title     = {{Design Considerations in Offline Preference-Based RL}},
  author    = {Agarwal, Alekh and Dann, Christoph and Marinov, Teodor Vanislavov},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {499--512},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/agarwal2025icml-design/}
}