Language Model Behavioral Phases Are Consistent Across Architecture, Training Data, and Scale

Abstract

We show that across architecture (Transformer vs. Mamba vs. RWKV), training dataset (OpenWebText vs. The Pile), and scale (14 million parameters to 12 billion parameters), autoregressive language models exhibit highly consistent patterns of change in their behavior over the course of pretraining. Based on our analysis of over 1,400 language model checkpoints on over 110,000 tokens of English, we find that up to 98% of the variance in language model behavior at the word level can be explained by three simple heuristics: the unigram probability (frequency) of a given word, the $n$-gram probability of the word, and the semantic similarity between the word and its context. Furthermore, we see consistent behavioral phases in all language models, with their predicted probabilities for words overfitting to those words' $n$-gram probabilities for increasing $n$ over the course of training. Taken together, these results suggest that learning in neural language models may follow a similar trajectory irrespective of model details.
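To make the abstract's analysis concrete, below is a minimal Python sketch (not the authors' code) of the general idea: regress per-word language-model surprisal on three heuristic predictors, a unigram log-probability, an $n$-gram (here, bigram) log-probability, and a word-context semantic-similarity score, and report the variance explained. The corpus, embeddings, and "surprisal" targets are all toy stand-ins; the real analysis uses pretrained checkpoints and over 110,000 tokens of English.

```python
"""Sketch: explain per-word LM surprisal with three simple heuristics."""
from collections import Counter
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy corpus standing in for the evaluation text.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog on the mat").split()

# --- Heuristic 1: unigram log-probability (word frequency) ---
unigram_counts = Counter(corpus)
total = len(corpus)
def unigram_logprob(w):
    return np.log(unigram_counts[w] / total)

# --- Heuristic 2: n-gram log-probability (bigram, add-one smoothed) ---
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)
def bigram_logprob(prev, w):
    return np.log((bigram_counts[(prev, w)] + 1) /
                  (unigram_counts[prev] + vocab_size))

# --- Heuristic 3: word-context semantic similarity ---
# Placeholder: random embeddings; a real analysis would use trained
# word vectors and cosine similarity with the preceding context.
emb = {w: rng.normal(size=16) for w in unigram_counts}
def context_similarity(context, w):
    ctx = np.mean([emb[c] for c in context], axis=0)
    v = emb[w]
    return float(ctx @ v / (np.linalg.norm(ctx) * np.linalg.norm(v)))

# Build the predictor matrix for each word (skipping the first word,
# which has no preceding context).
X, y = [], []
for i in range(1, len(corpus)):
    w, prev, context = corpus[i], corpus[i - 1], corpus[:i]
    X.append([unigram_logprob(w),
              bigram_logprob(prev, w),
              context_similarity(context, w)])
    # Stand-in for LM surprisal at this word; in the real analysis this
    # would come from a checkpoint's -log p(word | context).
    y.append(-unigram_logprob(w) + rng.normal(scale=0.1))

X, y = np.array(X), np.array(y)
model = LinearRegression().fit(X, y)
print(f"Variance explained (R^2): {model.score(X, y):.3f}")
```

In this sketch the target is deliberately constructed from the unigram predictor plus noise, so the regression explains most of the variance; the paper's claim is that real checkpoints' word-level behavior is similarly well captured by these heuristics, with the dominant $n$-gram order growing over training.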

Cite

Text

Michaelov et al. "Language Model Behavioral Phases Are Consistent Across Architecture, Training Data, and Scale." Advances in Neural Information Processing Systems, 2025.

Markdown

[Michaelov et al. "Language Model Behavioral Phases Are Consistent Across Architecture, Training Data, and Scale." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/michaelov2025neurips-language/)

BibTeX

@inproceedings{michaelov2025neurips-language,
  title     = {{Language Model Behavioral Phases Are Consistent Across Architecture, Training Data, and Scale}},
  author    = {Michaelov, James A. and Levy, Roger P. and Bergen, Ben},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/michaelov2025neurips-language/}
}