Transformers Learn Through Gradual Rank Increase

Abstract

We identify incremental learning dynamics in transformers, where the difference between trained and initial weights progressively increases in rank. We rigorously prove that this occurs under the simplifying assumptions of diagonal weight matrices and small initialization. Our experiments support the theory and also show that the phenomenon can occur in practice without the simplifying assumptions.
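As a concrete illustration of the quantity the abstract refers to, the sketch below (hypothetical, not taken from the paper) measures the numerical rank of the difference between current and initial weights. In the incremental-learning picture, this rank grows in discrete jumps over training; here each "phase" is simulated by adding one rank-1 update to the weights.

```python
import numpy as np

def effective_rank(delta, tol=1e-6):
    """Numerical rank of a weight-difference matrix via its singular values."""
    s = np.linalg.svd(delta, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(0)
d = 64
w_init = rng.standard_normal((d, d)) * 0.01  # small initialization

# Hypothetical training snapshots: each phase adds one rank-1 direction,
# so rank(w - w_init) increases by one per phase.
w = w_init.copy()
for phase in range(1, 4):
    u = rng.standard_normal((d, 1))
    v = rng.standard_normal((1, d))
    w = w + u @ v
    print(phase, effective_rank(w - w_init))
```

Running this prints ranks 1, 2, 3 across the three phases, mirroring the gradual rank increase of the trained-minus-initial weight difference described above.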

Cite

Text

Boix-Adsera et al. "Transformers Learn Through Gradual Rank Increase." Neural Information Processing Systems, 2023.

Markdown

[Boix-Adsera et al. "Transformers Learn Through Gradual Rank Increase." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/boixadsera2023neurips-transformers/)

BibTeX

@inproceedings{boixadsera2023neurips-transformers,
  title     = {{Transformers Learn Through Gradual Rank Increase}},
  author    = {Boix-Adsera, Enric and Littwin, Etai and Abbe, Emmanuel and Bengio, Samy and Susskind, Joshua},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/boixadsera2023neurips-transformers/}
}