Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges

Abstract

Large language models often struggle with length generalization and solving complex problem instances beyond their training distribution. We present a self-improvement approach where models iteratively generate and learn from their own solutions, progressively tackling harder problems while maintaining a standard transformer architecture. Across diverse tasks including arithmetic, string manipulation, and maze solving, self-improvement enables models to solve problems far beyond their initial training distribution—for instance, generalizing from 10-digit to 100-digit addition without apparent saturation. We observe that, in some cases, filtering for correct self-generated examples leads to exponential improvements in out-of-distribution performance across training rounds. Additionally, starting from pretrained models significantly accelerates this self-improvement process for several tasks. Our results demonstrate how controlled weak-to-strong curricula can systematically teach a model logical extrapolation without any changes to the positional embeddings or the model architecture.
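To make the loop concrete, below is a minimal Python sketch of one plausible reading of the self-improvement procedure the abstract describes: at each round the model samples solutions to slightly harder problems, keeps only the self-generated examples that pass an exact check, and fine-tunes on the kept set. The helpers generate, fine_tune, and make_problems are hypothetical placeholders, not the authors' released code, and the exact verifier assumes a task like addition whose answers can be checked programmatically.

from typing import Callable, List, Tuple

Problem = str
Answer = str

def self_improve(
    model,
    generate: Callable[[object, Problem], Answer],      # hypothetical: sample a solution from the model
    fine_tune: Callable[[object, List[Tuple[Problem, Answer]]], object],  # hypothetical: one round of fine-tuning
    make_problems: Callable[[int], List[Problem]],      # hypothetical: problems at a given difficulty
    check: Callable[[Problem, Answer], bool],           # exact verifier, e.g. integer parsing for addition
    start_difficulty: int = 10,                         # e.g. trained on up-to-10-digit addition
    end_difficulty: int = 100,                          # e.g. target 100-digit addition
    step: int = 1,
):
    """Each round: attempt slightly harder problems, filter for correct
    self-generated examples, and fine-tune on the filtered set."""
    for difficulty in range(start_difficulty + step, end_difficulty + 1, step):
        problems = make_problems(difficulty)
        labeled = [(p, generate(model, p)) for p in problems]
        # Filtering step: keep only verifiably correct self-generated data.
        kept = [(p, a) for p, a in labeled if check(p, a)]
        if kept:
            model = fine_tune(model, kept)
    return model

The key design choice this sketch illustrates is the weak-to-strong curriculum: difficulty increases only incrementally per round, so the model's partial competence at the current level supplies enough correct samples to bootstrap the next level.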

Cite

Text

Lee et al. "Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges." ICLR 2025 Workshops: SSI-FM, 2025.

Markdown

[Lee et al. "Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges." ICLR 2025 Workshops: SSI-FM, 2025.](https://mlanthology.org/iclrw/2025/lee2025iclrw-selfimproving/)

BibTeX

@inproceedings{lee2025iclrw-selfimproving,
  title     = {{Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges}},
  author    = {Lee, Nayoung and Cai, Ziyang and Schwarzschild, Avi and Lee, Kangwook and Papailiopoulos, Dimitris},
  booktitle = {ICLR 2025 Workshops: SSI-FM},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/lee2025iclrw-selfimproving/}
}