L$^2$M: Mutual Information Scaling Law for Long-Context Language Modeling
Abstract
We present a universal theoretical framework for understanding *long-context language modeling* based on a *bipartite* mutual information scaling law that we rigorously verify in natural language. We demonstrate that bipartite mutual information captures multi-token interactions that are distinct from, and scale independently of, conventional two-point mutual information, and that it provides a more complete characterization of the dependencies needed for accurately modeling long sequences. Leveraging this scaling law, we formulate the **L**ong-context **L**anguage **M**odeling (**L**$^2$**M**) condition, which lower bounds the necessary scaling of a model's history state—the latent variables responsible for storing past information—for effective long-context modeling. We validate the framework and its predictions on transformer and state-space models. Our work provides a principled foundation for understanding long-context modeling and for designing more efficient architectures with stronger long-context capabilities, with potential applications beyond natural language.
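The distinction between the two quantities can be made concrete with a toy experiment. The sketch below (a minimal illustration, not the paper's estimator) uses plug-in frequency estimates to compare two-point mutual information across a cut, $I(X_k; X_{k+1})$, with bipartite mutual information between the two halves of a sequence, $I(X_{1..k}; X_{k+1..L})$. For the first-order Markov chain used here the two quantities coincide in theory (by the Markov property), which is precisely the baseline against which natural language's independently scaling bipartite mutual information stands out. The sampler and chain parameters are illustrative assumptions.

```python
import random
from collections import Counter
from math import log2

def sample_markov(length, p_stay=0.9, n_seqs=20000, seed=0):
    """Sample binary sequences from a two-state Markov chain
    that stays in its current state with probability p_stay."""
    rng = random.Random(seed)
    seqs = []
    for _ in range(n_seqs):
        s = [rng.randint(0, 1)]
        for _ in range(length - 1):
            s.append(s[-1] if rng.random() < p_stay else 1 - s[-1])
        seqs.append(tuple(s))
    return seqs

def empirical_mi(pairs):
    """Plug-in estimate of I(A; B) in bits from (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

L = 6
seqs = sample_markov(L)
# Two-point MI between the two tokens adjacent to the midpoint cut.
two_point = empirical_mi([(s[L // 2 - 1], s[L // 2]) for s in seqs])
# Bipartite MI between the entire first and second halves.
bipartite = empirical_mi([(s[:L // 2], s[L // 2:]) for s in seqs])
print(f"two-point: {two_point:.3f} bits, bipartite: {bipartite:.3f} bits")
```

By the data-processing inequality the bipartite value can never fall below the two-point value; for this Markov toy they agree up to estimator bias, whereas in non-Markovian data such as natural language the gap grows with sequence length.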
Cite
Text
Chen et al. "L$^2$M: Mutual Information Scaling Law for Long-Context Language Modeling." Advances in Neural Information Processing Systems, 2025.
Markdown
[Chen et al. "L$^2$M: Mutual Information Scaling Law for Long-Context Language Modeling." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-2m/)
BibTeX
@inproceedings{chen2025neurips-2m,
title = {{L$^2$M: Mutual Information Scaling Law for Long-Context Language Modeling}},
author = {Chen, Zhuo and Comas, Oriol Mayné i and Jin, Zhuotao and Luo, Di and Soljacic, Marin},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/chen2025neurips-2m/}
}