AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Abstract

Reinforcement learning (RL) has become a trending paradigm for training large language models (LLMs), particularly for reasoning tasks. Effective RL for LLMs requires massive parallelization and creates an urgent need for efficient training systems. Most existing large-scale RL systems for LLMs are synchronous, alternating between generation and training in a batch setting where the rollouts in each training batch are produced by the same (or latest) model. This stabilizes RL training but incurs severe system-level inefficiency: generation must wait until the longest output in the batch is completed before the model can be updated, leaving GPUs underutilized. We present AReaL, a fully asynchronous RL system that completely decouples generation from training. Rollout workers in AReaL continuously generate new outputs without waiting, while training workers update the model whenever a batch of data is collected. AReaL also incorporates a collection of system-level optimizations, leading to substantially higher GPU utilization. To stabilize RL training, AReaL balances the workload of rollout and training workers to control data staleness and adopts a staleness-enhanced PPO variant to better handle outdated training samples. Extensive experiments on math and code reasoning benchmarks show that AReaL achieves up to 2.77x training speedup over synchronous systems with the same number of GPUs, with matched or even improved final performance. The code of AReaL is available at https://github.com/inclusionAI/AReaL/.
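To make the decoupled design concrete, the sketch below shows the general shape of such an asynchronous loop: rollout workers keep pushing samples into a shared buffer while a trainer consumes fixed-size batches, bumps a policy version after each update, and drops samples whose generating policy is too stale. This is a minimal illustrative sketch under assumed names (rollout_worker, trainer, MAX_STALENESS, etc.), not AReaL's actual API or staleness rule.

```python
# Illustrative sketch of an asynchronous rollout/training loop with a simple
# staleness cap. All names and the dropping rule are assumptions for this
# example, not AReaL's implementation; generation and PPO updates are stubbed.
import queue
import random
import threading
import time

BATCH_SIZE = 8
MAX_STALENESS = 2   # max allowed gap between a sample's policy version and the current one
TOTAL_UPDATES = 5

sample_buffer = queue.Queue()
policy_version = 0            # incremented by the trainer after every update
version_lock = threading.Lock()
stop_event = threading.Event()


def rollout_worker(worker_id: int) -> None:
    """Continuously generate outputs with the latest available weights,
    without waiting for the trainer (the asynchronous part)."""
    while not stop_event.is_set():
        with version_lock:
            version = policy_version
        # Stand-in for LLM generation; variable duration mimics variable-length outputs.
        time.sleep(random.uniform(0.01, 0.05))
        sample = {"worker": worker_id, "version": version, "reward": random.random()}
        sample_buffer.put(sample)


def trainer() -> None:
    """Consume a batch as soon as it is available and update the policy,
    dropping samples whose generating policy is too stale (a toy stand-in
    for staleness control)."""
    global policy_version
    for update in range(TOTAL_UPDATES):
        batch = []
        while len(batch) < BATCH_SIZE:
            sample = sample_buffer.get()
            if policy_version - sample["version"] <= MAX_STALENESS:
                batch.append(sample)
        time.sleep(0.05)  # stand-in for a (staleness-aware) PPO update on the batch
        with version_lock:
            policy_version += 1
        versions = sorted({s["version"] for s in batch})
        print(f"update {update}: trained on samples from policy versions {versions}")
    stop_event.set()


workers = [threading.Thread(target=rollout_worker, args=(i,), daemon=True) for i in range(4)]
for w in workers:
    w.start()
trainer()
```

In this toy setup, generation never blocks on training: workers always produce with whatever weights version is current, and the trainer starts an update as soon as enough fresh-enough samples have accumulated, which is the source of the GPU-utilization gains the abstract describes.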

Cite

Text

Fu et al. "AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Fu et al. "AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/fu2025neurips-areal/)

BibTeX

@inproceedings{fu2025neurips-areal,
  title     = {{AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning}},
  author    = {Fu, Wei and Gao, Jiaxuan and Shen, Xujie and Zhu, Chen and Mei, Zhiyu and He, Chuyi and Xu, Shusheng and Wei, Guo and Mei, Jun and Wang, Jiashu and Yang, Tongkai and Yuan, Binhang and Wu, Yi},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/fu2025neurips-areal/}
}