Memorization in Attention-Only Transformers
Abstract
Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypothesis to any context size. Our approach improves upon the state-of-the-art by achieving more effective exact memorization with an attention layer, while also introducing the concept of approximate memorization of distributions. Through experimental validation, we demonstrate that our proposed bounds more accurately reflect the true memorization capacity of language models, and provide a precise comparison with prior work.
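To make the setting concrete, the sketch below illustrates (purely as an example, not the paper's construction) what "exact memorization with an attention layer" means: a tiny attention-only model, with no MLP block, is trained to map each of a fixed set of random contexts to a prescribed next token, and the fraction of pairs recovered exactly is reported. All sizes, names, and the training setup here are hypothetical choices for illustration.

```python
# Illustrative sketch only: an attention-only model (embedding, one
# single-head self-attention layer, unembedding, no MLP) trained to
# memorize a random context -> next-token mapping. Hyperparameters are
# arbitrary and not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, ctx_len, d_model, n_pairs = 64, 8, 128, 256

class AttentionOnly(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(ctx_len, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, x):  # x: (batch, ctx_len) token ids
        h = self.tok(x) + self.pos(torch.arange(ctx_len, device=x.device))
        h, _ = self.attn(h, h, h)   # single attention layer, no MLP block
        return self.out(h[:, -1])   # logits for the token following the context

# Random dataset to memorize: each context is assigned an arbitrary next token.
X = torch.randint(0, vocab, (n_pairs, ctx_len))
y = torch.randint(0, vocab, (n_pairs,))

model = AttentionOnly()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Exact memorization rate: fraction of contexts whose prescribed token is recovered.
acc = (model(X).argmax(-1) == y).float().mean().item()
print(f"memorized {acc:.1%} of {n_pairs} context -> token pairs")
```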
Cite
Text
Dana et al. "Memorization in Attention-Only Transformers." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.
Markdown
[Dana et al. "Memorization in Attention-Only Transformers." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/dana2025aistats-memorization/)
BibTeX
@inproceedings{dana2025aistats-memorization,
title = {{Memorization in Attention-Only Transformers}},
author = {Dana, Léo and Pydi, Muni Sreenivas and Chevaleyre, Yann},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {3133--3141},
volume = {258},
url = {https://mlanthology.org/aistats/2025/dana2025aistats-memorization/}
}