EvoLM: In Search of Lost Training Dynamics for Language Model Reasoning

Abstract

Modern language model (LM) training is divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, a model suite that enables systematic and transparent analysis of LMs' training dynamics across pre-training, continued pre-training, supervised fine-tuning, and reinforcement learning. By training over 100 LMs with 1B and 4B parameters from scratch, we rigorously evaluate both upstream (language modeling) and downstream (problem-solving) reasoning capabilities, considering both in-domain and out-of-domain generalization. Key insights include the diminishing returns of excessive pre-training and post-training, the importance of (and practices for) mitigating forgetting during domain-specific continued pre-training, the crucial role of continued pre-training in bridging the pre-training and post-training phases, and the intricate trade-offs involved in configuring supervised fine-tuning and reinforcement learning. To facilitate open research and reproducibility, we release all pre-trained and post-trained models, training datasets for all stages, and our entire training and evaluation pipeline.

Cite

Text

Qi et al. "EvoLM: In Search of Lost Training Dynamics for Language Model Reasoning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Qi et al. "EvoLM: In Search of Lost Training Dynamics for Language Model Reasoning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/qi2025neurips-evolm/)

BibTeX

@inproceedings{qi2025neurips-evolm,
  title     = {{EvoLM: In Search of Lost Training Dynamics for Language Model Reasoning}},
  author    = {Qi, Zhenting and Nie, Fan and Alahi, Alexandre and Zou, James and Lakkaraju, Himabindu and Du, Yilun and Xing, Eric P. and Kakade, Sham M. and Zhang, Hanlin},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/qi2025neurips-evolm/}
}