When Can Proxies Improve the Sample Complexity of Preference Learning?
Abstract
We address the problem of reward hacking, where maximising a proxy reward does not necessarily increase the true reward. This is a key concern for Large Language Models (LLMs), as they are often fine-tuned on human preferences that may not accurately reflect a true objective. Existing work uses various techniques, such as regularisation, tweaks to the reward model, and reward hacking detectors, to limit the influence that such proxy preferences have on a model. Luckily, in many contexts such as medicine, education, and law, a small amount of expert data is often available. In these cases, it is often unclear whether the addition of proxy data can improve policy learning. We outline a set of sufficient conditions on proxy feedback that, if satisfied, indicate that proxy data can provably improve the sample complexity of learning the ground-truth policy. These conditions can inform the data collection process for specific tasks. The result implies a parameterisation for LLMs that achieves this improved sample complexity. We detail how one can adapt existing architectures to yield this improved sample complexity.
Cite
Text
Zhu et al. "When Can Proxies Improve the Sample Complexity of Preference Learning?" Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Zhu et al. "When Can Proxies Improve the Sample Complexity of Preference Learning?" Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/zhu2025icml-proxies/)
BibTeX
@inproceedings{zhu2025icml-proxies,
title = {{When Can Proxies Improve the Sample Complexity of Preference Learning?}},
author = {Zhu, Yuchen and De Souza, Daniel Augusto and Shi, Zhengyan and Yang, Mengyue and Minervini, Pasquale and Kusner, Matt and D’Amour, Alexander},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {79790--79814},
volume = {267},
url = {https://mlanthology.org/icml/2025/zhu2025icml-proxies/}
}