Post-Hoc Reward Calibration: A Case Study on Length Bias

Abstract

Reinforcement Learning from Human Feedback (RLHF) aligns the outputs of Large Language Models (LLMs) with human values and preferences. Central to this process is the reward model (RM), which translates human feedback into training signals for optimising LLM behaviour. However, RMs can develop biases by exploiting spurious correlations in their training data, such as favouring outputs based on length or style rather than true quality. These biases can lead to incorrect output rankings, sub-optimal model evaluations, and the amplification of undesirable behaviours during LLM alignment. This paper addresses the challenge of correcting such biases without additional data or training, introducing the concept of Post-hoc Reward Calibration. We first propose estimating the bias term with a locally averaged reward and removing it to approximate the underlying true reward. We then extend this approach to a more general and robust form using Locally Weighted Regression. Focusing on the prevalent length bias, we validate the proposed approaches across three experimental settings, demonstrating consistent improvements: (1) a 3.11 average performance gain across 33 reward models on the RewardBench dataset; (2) improved agreement of RM-produced rankings with GPT-4 evaluations and human preferences on the AlpacaEval benchmark; and (3) an improved Length-Controlled win rate (Dubois et al., 2024) of the RLHF process across multiple LLM–RM combinations. In our experiments, the method is computationally efficient and generalises to other types of bias and RMs, offering a scalable and robust solution for mitigating biases in LLM alignment and evaluation.
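
The calibration idea described above, estimating the component of the reward that correlates with a bias feature such as response length and subtracting it, can be sketched in a few lines. The following Python snippet is a minimal illustration only, not the paper's implementation: the function names, the tricube-weighted local linear fit, and the synthetic data are assumptions made here for demonstration.

import numpy as np

def lowess_fit(x, y, frac=0.3):
    # Locally weighted linear regression with tricube weights (illustrative).
    # For each query point x[i], fit a weighted least-squares line on the
    # nearest `frac` fraction of points and return the fitted value at x[i].
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))          # neighbourhood size
    fitted = np.empty(n)
    for i in range(n):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]              # k nearest neighbours
        h = dist[idx].max() or 1.0              # local bandwidth
        w = (1 - (dist[idx] / h) ** 3) ** 3     # tricube kernel weights
        X = np.stack([np.ones(k), x[idx]], axis=1)
        W = np.diag(w)
        beta = np.linalg.pinv(X.T @ W @ X) @ X.T @ W @ y[idx]
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

def calibrate_rewards(lengths, rewards, frac=0.3):
    # Subtract the length-correlated trend (the estimated bias term)
    # from the raw rewards to approximate the underlying true reward.
    bias_estimate = lowess_fit(lengths, rewards, frac=frac)
    return np.asarray(rewards, float) - bias_estimate

# Toy usage: raw rewards that drift upward with response length.
rng = np.random.default_rng(0)
lengths = rng.integers(20, 500, size=200).astype(float)
true_quality = rng.normal(0.0, 1.0, size=200)
raw_rewards = true_quality + 0.004 * lengths        # injected length bias
calibrated = calibrate_rewards(lengths, raw_rewards)
print(np.corrcoef(lengths, raw_rewards)[0, 1])      # positive correlation
print(np.corrcoef(lengths, calibrated)[0, 1])       # close to zero

In this toy setting, the raw rewards correlate positively with length while the calibrated rewards do not, mirroring the intended effect of the post-hoc correction; the simpler variant in the abstract, a locally averaged reward, corresponds to replacing the local linear fit with a local mean.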

Cite

Text

Huang et al. "Post-Hoc Reward Calibration: A Case Study on Length Bias." International Conference on Learning Representations, 2025.

Markdown

[Huang et al. "Post-Hoc Reward Calibration: A Case Study on Length Bias." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/huang2025iclr-posthoc/)

BibTeX

@inproceedings{huang2025iclr-posthoc,
  title     = {{Post-Hoc Reward Calibration: A Case Study on Length Bias}},
  author    = {Huang, Zeyu and Qiu, Zihan and Wang, Zili and Ponti, Edoardo and Titov, Ivan},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/huang2025iclr-posthoc/}
}