Linguistic Collapse: Neural Collapse in (Large) Language Models
Abstract
Neural collapse ($\mathcal{NC}$) is a phenomenon observed in classification tasks where top-layer representations collapse into their class means, which become equinorm, equiangular and aligned with the classifiers. These behaviors -- associated with generalization and robustness -- would manifest under specific conditions: models are trained towards zero loss, with noise-free labels belonging to balanced classes, which do not outnumber the model's hidden dimension. Recent studies have explored $\mathcal{NC}$ in the absence of one or more of these conditions to extend and capitalize on the associated benefits of ideal geometries. Language modeling presents a curious frontier, as \textit{training by token prediction} constitutes a classification task where none of the conditions exist: the vocabulary is imbalanced and exceeds the embedding dimension; different tokens might correspond to similar contextual embeddings; and large language models (LLMs) in particular are typically only trained for a few epochs. This paper empirically investigates the impact of scaling the architectures and training of causal language models (CLMs) on their progression towards $\mathcal{NC}$. We find that $\mathcal{NC}$ properties that develop with scale (and regularization) are linked to generalization. Moreover, there is evidence of some relationship between $\mathcal{NC}$ and generalization independent of scale. Our work thereby underscores the generality of $\mathcal{NC}$ as it extends to the novel and more challenging setting of language modeling. Downstream, we seek to inspire further research on the phenomenon to deepen our understanding of LLMs -- and neural networks at large -- and improve existing architectures based on $\mathcal{NC}$-related properties. Our code is hosted on GitHub: https://github.com/rhubarbwu/linguistic-collapse.
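The geometric properties named in the abstract (equinorm and equiangular class means, and their alignment with the classifier rows) can be made concrete with a few simple diagnostics. Below is a minimal, hypothetical PyTorch sketch, not taken from the authors' repository; the function name, tensor shapes, and the specific statistics are illustrative assumptions rather than the paper's exact metrics.

```python
import torch

def nc_geometry_stats(features: torch.Tensor, labels: torch.Tensor, classifier: torch.Tensor):
    """Illustrative neural-collapse diagnostics (assumed sketch, not the authors' code).

    features:   (N, d) top-layer embeddings
    labels:     (N,)   class/token indices in [0, C)
    classifier: (C, d) rows of the final linear (unembedding) layer
    """
    classes = labels.unique()
    # Per-class means of the embeddings, centred around the global mean.
    means = torch.stack([features[labels == c].mean(dim=0) for c in classes])  # (C, d)
    centred = means - means.mean(dim=0)

    # Equinorm: centred class means should have (nearly) equal norms.
    norms = centred.norm(dim=1)
    equinorm_cv = (norms.std() / norms.mean()).item()  # coefficient of variation

    # Equiangularity: pairwise cosines between centred means should be uniform.
    unit = centred / norms.unsqueeze(1)
    cos = unit @ unit.T
    off_diag = cos[~torch.eye(len(classes), dtype=torch.bool)]
    equiangular_std = off_diag.std().item()

    # Alignment (self-duality): classifier rows should point along the class means.
    w = classifier[classes]
    w_unit = w / w.norm(dim=1, keepdim=True)
    duality = (w_unit * unit).sum(dim=1).mean().item()

    return equinorm_cv, equiangular_std, duality
```

Smaller values of the first two statistics and an alignment score near 1 would indicate a geometry closer to the idealized $\mathcal{NC}$ configuration; in the language-modeling setting studied here, class imbalance and the large vocabulary make such measurements per-token and noisier.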
Cite
Text
Wu and Papyan. "Linguistic Collapse: Neural Collapse in (Large) Language Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-4366

Markdown

[Wu and Papyan. "Linguistic Collapse: Neural Collapse in (Large) Language Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/wu2024neurips-linguistic/) doi:10.52202/079017-4366

BibTeX
@inproceedings{wu2024neurips-linguistic,
  title     = {{Linguistic Collapse: Neural Collapse in (Large) Language Models}},
  author    = {Wu, Robert and Papyan, Vardan},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4366},
  url       = {https://mlanthology.org/neurips/2024/wu2024neurips-linguistic/}
}