SkyLadder: Better and Faster Pretraining via Context Window Scheduling

Abstract

Recent advancements in LLM pretraining have featured ever-expanding context windows to process longer sequences. However, our controlled study reveals that models pretrained with shorter context windows consistently outperform their long-context counterparts under a fixed token budget. This finding motivates us to explore an optimal context window scheduling strategy to better balance long-context capability with pretraining efficiency. To this end, we propose SkyLadder, a simple yet effective approach that implements a short-to-long context window transition. SkyLadder preserves strong standard benchmark performance, while matching or exceeding baseline results on long-context tasks. Through extensive experiments, we pretrain 1B-parameter models (up to 32K context) and 3B-parameter models (8K context) on 100B tokens, demonstrating that SkyLadder yields consistent gains of up to 3.7% on common benchmarks, while achieving up to 22% faster training speeds compared to baselines.
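To make the short-to-long transition concrete, here is a minimal sketch of what a context window schedule could look like. The abstract does not specify the schedule shape, so the linear ramp, the function name `context_window_at_step`, and all default values below are illustrative assumptions rather than the paper's actual method.

```python
def context_window_at_step(step: int,
                           total_steps: int,
                           initial_window: int = 256,
                           final_window: int = 8192,
                           ramp_fraction: float = 0.8) -> int:
    """Return the context window (in tokens) to use at a given training step.

    Hypothetical schedule: grow the window linearly from `initial_window`
    to `final_window` over the first `ramp_fraction` of training, then hold
    it at `final_window`. All parameter names and defaults are assumptions
    for illustration, not values taken from the paper.
    """
    ramp_steps = max(1, int(total_steps * ramp_fraction))
    if step >= ramp_steps:
        return final_window
    progress = step / ramp_steps
    return int(initial_window + progress * (final_window - initial_window))


if __name__ == "__main__":
    # Print the scheduled window at a few points of a 100k-step run.
    total = 100_000
    for s in (0, 25_000, 50_000, 80_000, 99_999):
        print(f"step {s:>6}: context window {context_window_at_step(s, total)}")
```

In practice such a schedule would drive how training sequences are packed or truncated at each step; other ramp shapes (e.g., staged or exponential growth) would fit the same interface.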

Cite

Text

Zhu et al. "SkyLadder: Better and Faster Pretraining via Context Window Scheduling." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhu et al. "SkyLadder: Better and Faster Pretraining via Context Window Scheduling." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-skyladder/)

BibTeX

@inproceedings{zhu2025neurips-skyladder,
  title     = {{SkyLadder: Better and Faster Pretraining via Context Window Scheduling}},
  author    = {Zhu, Tongyao and Liu, Qian and Wang, Haonan and Chen, Shiqi and Gu, Xiangming and Pang, Tianyu and Kan, Min-Yen},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhu2025neurips-skyladder/}
}