Avoiding exp(R) Scaling in RLHF Through Preference-Based Exploration

Abstract

Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal technique for large language model (LLM) alignment. This paper studies online RLHF and focuses on improving its sample efficiency. All existing algorithms for online RLHF, whether they perform passive or active exploration, suffer from a sample complexity that scales exponentially with the range of the reward function. This statistical inefficiency hinders their effectiveness in scenarios with heavily skewed preferences, e.g., questions with objectively correct answers. To address this, we introduce Self-Exploring Preference-Incentive Online Preference Optimization (SE-POPO), the first online RLHF algorithm to achieve a sample complexity that scales polynomially with the reward range, answering an open problem raised by Xie et al. [2024]. Theoretically, we demonstrate that the sample complexity of SE-POPO dominates that of existing exploration algorithms. Empirically, our systematic evaluation confirms that SE-POPO is more sample-efficient than both exploratory and non-exploratory baselines, in two primary application scenarios of RLHF as well as on public benchmarks, marking a significant step forward in RLHF algorithm design.
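For context, the following is a minimal sketch (not taken from this paper's analysis) of where the exp(R) factor typically enters in standard RLHF theory. Under the usual Bradley–Terry preference model with rewards $r(x, y) \in [0, R]$ for prompt $x$ and responses $y_1, y_2$ (notation assumed here, not from the abstract), preferences are generated as

$$
\mathbb{P}\bigl(y_1 \succ y_2 \mid x\bigr) \;=\; \sigma\bigl(r(x, y_1) - r(x, y_2)\bigr),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}.
$$

Since the reward difference satisfies $|r(x, y_1) - r(x, y_2)| \le R$, the slope of the logistic link over the relevant range is bounded below by

$$
\inf_{|z| \le R} \sigma'(z) \;=\; \sigma(R)\bigl(1 - \sigma(R)\bigr) \;=\; \frac{e^{R}}{(1 + e^{R})^{2}} \;\ge\; \tfrac{1}{4}\, e^{-R},
$$

and analyses that convert preference-estimation error back into reward-estimation error typically pay a factor of order $1 / \inf_{|z| \le R} \sigma'(z) = O(e^{R})$. This is the exp(R) scaling referred to in the title, which SE-POPO is designed to avoid.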

Cite

Text

Chen et al. "Avoiding exp(R) Scaling in RLHF Through Preference-Based Exploration." Advances in Neural Information Processing Systems, 2025.

Markdown

[Chen et al. "Avoiding exp(R) Scaling in RLHF Through Preference-Based Exploration." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-avoiding/)

BibTeX

@inproceedings{chen2025neurips-avoiding,
  title     = {{Avoiding exp(R) Scaling in RLHF Through Preference-Based Exploration}},
  author    = {Chen, Mingyu and Chen, Yiding and Sun, Wen and Zhang, Xuezhou},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/chen2025neurips-avoiding/}
}