Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models

Abstract

The dominant paradigm for RLHF is *online* and *on-policy* RL: synchronously generating from the large language model (LLM) policy, labelling with a reward model, and learning using feedback on the LLM's own outputs. While performant, this paradigm is computationally inefficient. Inspired by classical deep RL literature, we propose separating generation and learning in RLHF. This enables asynchronous generation of new samples while simultaneously training on old samples, leading to faster training and more compute-optimal scaling. However, asynchronous training relies on an underexplored regime, online but *off-policy* RLHF: learning on samples from previous iterations of our model, which provide a weaker training signal. We tackle the fundamental challenge in this regime: how much off-policyness can we tolerate for asynchronous training to speed up learning while maintaining performance? Among the RLHF algorithms we test, we find online DPO to be the most robust to off-policy data, and its robustness increases with the scale of the policy model. We study further compute optimizations for asynchronous RLHF but find that they come at a performance cost, giving rise to a trade-off. We verify the scalability of asynchronous RLHF by training a general-purpose chatbot from LLaMA 3.1 8B on an instruction-following task $\sim$40\% faster than a synchronous run while matching final performance. Finally, we extend our results to math and reasoning to demonstrate that asynchronous RL can finetune Rho 1B on GSM8k $\sim$70\% faster while matching synchronous accuracy.
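
The asynchronous split described in the abstract (a generation worker running ahead of the trainer, so that policy updates happen on slightly stale, off-policy samples) can be sketched as a small producer/consumer loop. The snippet below is a minimal illustration under our own assumptions, not the authors' implementation: `generate_batch` and the placeholder log-probabilities are hypothetical stand-ins, and the loss is the standard DPO objective applied online to one-step-stale batches.

```python
# Minimal sketch of asynchronous generate/train separation with an online
# DPO-style update on off-policy (one-step-stale) samples. All model and
# reward functions are hypothetical stand-ins for illustration only.
import math
import queue
import threading

# Bounded queue: generation may run at most one batch ahead of training.
sample_queue: "queue.Queue[list[tuple[str, str, str]]]" = queue.Queue(maxsize=1)


def generate_batch(policy_version: int) -> list[tuple[str, str, str]]:
    # Hypothetical stand-in: sample two completions per prompt from the current
    # policy and rank them with a reward model into (prompt, chosen, rejected).
    return [(f"prompt-{policy_version}", "chosen completion", "rejected completion")]


def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    # Standard DPO objective on a single preference pair:
    # -log sigmoid(beta * [(logp_c - ref_c) - (logp_r - ref_r)])
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


def generation_worker(num_iterations: int) -> None:
    for it in range(num_iterations):
        # Samples from iteration `it` are consumed by the trainer while
        # iteration `it + 1` is already being generated.
        sample_queue.put(generate_batch(policy_version=it))


def training_worker(num_iterations: int) -> None:
    for it in range(num_iterations):
        batch = sample_queue.get()  # one-step-stale (off-policy) samples
        losses = []
        for _prompt, _chosen, _rejected in batch:
            # Placeholder log-probabilities; a real trainer would score each
            # pair under the current policy and a frozen reference model.
            losses.append(dpo_loss(-1.0, -2.0, -1.2, -1.8))
        print(f"iteration {it}: mean online DPO loss on off-policy batch = "
              f"{sum(losses) / len(losses):.4f}")


if __name__ == "__main__":
    n = 4
    gen = threading.Thread(target=generation_worker, args=(n,))
    train = threading.Thread(target=training_worker, args=(n,))
    gen.start(); train.start()
    gen.join(); train.join()
```

The bounded queue is what keeps generation at most one batch ahead of training in this sketch; how much such off-policyness can be tolerated while preserving performance is exactly the question the paper studies.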

Cite

Text

Noukhovitch et al. "Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models." International Conference on Learning Representations, 2025.

Markdown

[Noukhovitch et al. "Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/noukhovitch2025iclr-asynchronous/)

BibTeX

@inproceedings{noukhovitch2025iclr-asynchronous,
  title     = {{Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models}},
  author    = {Noukhovitch, Michael and Huang, Shengyi and Xhonneux, Sophie and Hosseini, Arian and Agarwal, Rishabh and Courville, Aaron},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/noukhovitch2025iclr-asynchronous/}
}