Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization

Abstract

The rapid proliferation of large language models (LLMs) such as GPT-4 and Gemini underscores the intense resource demands of their training, which incurs substantial computational and environmental costs. In this paper, we introduce a novel checkpoint merging strategy that makes efficient use of the intermediate checkpoints produced during LLM pretraining. The method merges checkpoints that share a training trajectory, using Bayesian optimization to search the large space of possible merging weights for the best combination. Our experiments demonstrate that: (1) the proposed method can augment pretraining, yielding substantial benefits at minimal additional cost; (2) although the merging weight is selected on a given held-out dataset, the merged model still generalizes robustly across diverse domains, a pivotal property in pretraining.
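To make the approach concrete, the sketch below illustrates the core loop under stated assumptions: two PyTorch-style checkpoints from the same training run are interpolated with a single weight, and scikit-optimize's gp_minimize searches for the weight that minimizes held-out loss. The checkpoint paths and the eval_held_out_loss helper are hypothetical placeholders, and the paper's actual procedure may merge more than two checkpoints; this is a sketch, not the authors' implementation.

# A minimal sketch of Bayesian-optimized checkpoint merging, assuming
# PyTorch state dicts and scikit-optimize. Not the authors' code.
import torch
from skopt import gp_minimize

# Hypothetical paths to two checkpoints from the same training trajectory.
ckpt_a = torch.load("checkpoint_step_100k.pt")
ckpt_b = torch.load("checkpoint_step_110k.pt")

def merge(lmbda):
    # Elementwise interpolation of parameters; assumes floating-point tensors.
    return {k: lmbda * ckpt_a[k] + (1.0 - lmbda) * ckpt_b[k] for k in ckpt_a}

def eval_held_out_loss(state_dict):
    # Placeholder: load `state_dict` into the model and return its mean
    # loss on the held-out dataset. Replace with a real evaluation routine.
    raise NotImplementedError

def objective(params):
    # Bayesian optimization treats held-out loss as a black-box function
    # of the merging weight.
    return eval_held_out_loss(merge(params[0]))

# Gaussian-process search over the merging weight in [0, 1].
result = gp_minimize(objective, dimensions=[(0.0, 1.0)], n_calls=20, random_state=0)
best_merged = merge(result.x[0])

Because each evaluation of the objective requires a full pass over the held-out set, a sample-efficient search such as Gaussian-process Bayesian optimization is preferable to a dense grid over the merging weight.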

Cite

Text

Liu et al. "Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Liu et al. "Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liu2025icml-maximizing/)

BibTeX

@inproceedings{liu2025icml-maximizing,
  title     = {{Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization}},
  author    = {Liu, Deyuan and Wang, Zecheng and Wang, Bingning and Chen, Weipeng and Li, Chunshan and Tu, Zhiying and Chu, Dianhui and Sui, Dianbo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {39713--39741},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/liu2025icml-maximizing/}
}