ResponseRank: Data-Efficient Reward Modeling Through Preference Strength Learning
Abstract
Binary choices, as often used for reinforcement learning from human feedback (RLHF), convey only the *direction* of a preference. A person may choose apples over oranges and bananas over grapes, but *which preference is stronger*? Strength is crucial for decision-making under uncertainty and generalization of preference models, but hard to measure reliably. Metadata such as response times and inter-annotator agreement can serve as proxies for strength, but are often noisy and confounded. We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in proxy signals to *rank responses to pairwise comparisons by their inferred preference strength*. To control for systematic variation, we compare signals only locally within carefully constructed strata. This enables robust learning of utility differences consistent with strength-derived rankings while making minimal assumptions about the strength signal. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks: synthetic preference learning (with simulated response times), language modeling (with annotator agreement), and RL control tasks (with simulated episode returns); and (3) the *Pearson Distance Correlation (PDC)*, a novel metric that isolates cardinal utility learning from ordinal accuracy.
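As a rough illustration of the training signal described above, the sketch below combines a standard Bradley-Terry preference loss on the *direction* of each comparison with an auxiliary ranking loss that pushes the model's utility gaps to be ordered consistently with the proxy-derived strength signal, comparing strengths only within the same stratum. This is a minimal PyTorch sketch under assumed conventions, not the authors' implementation; the function name `responserank_style_loss`, the logistic ranking term, and the simple within-stratum pair enumeration are illustrative choices.

```python
import torch
import torch.nn.functional as F

def responserank_style_loss(u_pref, u_rej, strength, strata):
    """Hypothetical sketch of a strength-ranking auxiliary objective.

    u_pref, u_rej: predicted utilities of the preferred / rejected item per comparison, shape (N,)
    strength:      noisy proxy strength signal per comparison (e.g. inverse response time
                   or annotator agreement), shape (N,)
    strata:        stratum id per comparison; strengths are only compared within a stratum
    """
    margin = u_pref - u_rej  # predicted utility gap per comparison

    # Standard Bradley-Terry / logistic loss on the direction of each preference.
    direction_loss = F.softplus(-margin).mean()

    # Auxiliary ranking loss: for comparisons i, j in the same stratum where i has a
    # larger proxy strength than j, encourage margin_i > margin_j (logistic ranking term).
    rank_terms = []
    for s in strata.unique():
        idx = (strata == s).nonzero(as_tuple=True)[0]
        for i in idx:
            for j in idx:
                if strength[i] > strength[j]:
                    rank_terms.append(F.softplus(margin[j] - margin[i]))
    rank_loss = torch.stack(rank_terms).mean() if rank_terms else margin.new_zeros(())

    return direction_loss + rank_loss
```

In practice the quadratic within-stratum pair enumeration would likely be subsampled or vectorized, and the raw proxy signal would be cleaned or binned before being treated as a within-stratum strength ranking.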
Cite
Text
Kaufmann et al. "ResponseRank: Data-Efficient Reward Modeling Through Preference Strength Learning." Advances in Neural Information Processing Systems, 2025.Markdown
[Kaufmann et al. "ResponseRank: Data-Efficient Reward Modeling Through Preference Strength Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/kaufmann2025neurips-responserank/)BibTeX
@inproceedings{kaufmann2025neurips-responserank,
  title     = {{ResponseRank: Data-Efficient Reward Modeling Through Preference Strength Learning}},
  author    = {Kaufmann, Timo and Metz, Yannick and Keim, Daniel A. and Hüllermeier, Eyke},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kaufmann2025neurips-responserank/}
}