Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model
Abstract
Reinforcement learning from human feedback (RLHF) has been widely adopted to align language models (LMs) with human preferences. Previous RLHF works typically take a bandit formulation, which, though intuitive, ignores the sequential nature of LM generation and can suffer from the sparse-reward issue. While recent works propose dense token-level RLHF, treating each token as an action may be too fine-grained for proper reward assignment. In this paper, we seek to get the best of both worlds by training and utilizing a segment-level reward model, which assigns a reward to each semantically complete text segment that spans a short sequence of tokens. For reward learning, our method allows dynamic text segmentation and is compatible with standard sequence-preference datasets. For effective RL-based LM training against the segment-level reward, we generalize the classical scalar bandit-reward normalizers into location-aware normalizer functions and interpolate the segment rewards for further densification. Our method performs competitively on three popular RLHF benchmarks for LM policies: AlpacaEval 2.0, Arena-Hard, and MT-Bench. Ablation studies further demonstrate our method.
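To make the abstract's three ingredients concrete (segment-level rewards, location-aware normalization, and interpolation-based densification), the following minimal Python sketch illustrates the general idea. It is not the authors' implementation: the punctuation-based segmenter, the linear position-dependent baseline, and the uniform spreading of each segment's reward over its tokens are all hypothetical stand-ins for the paper's actual procedures.

```python
from typing import List

# Illustrative sketch of segment-level reward shaping; every rule below is a
# simplifying assumption, not the paper's actual procedure.

PUNCT = {".", ",", ";", "!", "?"}

def segment(tokens: List[str]) -> List[List[str]]:
    """Close a segment at punctuation -- a crude stand-in for the paper's
    dynamic segmentation into semantically complete text segments."""
    segments, current = [], []
    for tok in tokens:
        current.append(tok)
        if tok in PUNCT:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def normalize(raw: float, position: int, num_segments: int) -> float:
    """Location-aware normalizer: the baseline subtracted from a raw segment
    reward depends on where the segment sits in the response, generalizing a
    single scalar bandit-reward normalizer. The linear form is an assumed
    example."""
    baseline = 0.1 + 0.05 * (position / max(num_segments - 1, 1))
    return raw - baseline

def densify(segment_rewards: List[float], segments: List[List[str]]) -> List[float]:
    """Spread each segment's normalized reward over its tokens so the RL
    trainer sees a dense per-token signal (uniform spreading assumed here)."""
    per_token: List[float] = []
    for reward, seg in zip(segment_rewards, segments):
        per_token.extend([reward / len(seg)] * len(seg))
    return per_token

if __name__ == "__main__":
    tokens = "Thanks , that answer was clear and helpful .".split()
    segs = segment(tokens)
    # Placeholder "reward model": mean character length per token in a segment.
    raw = [sum(len(t) for t in s) / len(s) for s in segs]
    normed = [normalize(r, i, len(segs)) for i, r in enumerate(raw)]
    print(segs)
    print(densify(normed, segs))
```

The design point the sketch tries to capture is the middle ground the abstract argues for: rewards are assigned at a coarser granularity than single tokens but a finer one than the whole response, then normalized with position awareness and densified before RL training.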
Cite
Text
Yin et al. "Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model." Transactions on Machine Learning Research, 2025.

Markdown
[Yin et al. "Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/yin2025tmlr-segmenting/)

BibTeX
@article{yin2025tmlr-segmenting,
  title   = {{Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model}},
  author  = {Yin, Yueqin and Yang, Shentao and Xie, Yujia and Yang, Ziyi and Sun, Yuting and Awadalla, Hany Hassan and Chen, Weizhu and Zhou, Mingyuan},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/yin2025tmlr-segmenting/}
}