Scale-Invariant Attention
Abstract
One persistent challenge in LLM research is developing attention mechanisms that generalise from training on shorter contexts to inference on longer contexts. We propose two conditions that we expect all effective long-context attention mechanisms to satisfy: scale-invariant total attention and scale-invariant attention sparsity. Under a Gaussian assumption, we show that a simple position-dependent transformation of the attention logits is sufficient for these conditions to hold. Experimentally, we find that the resulting scale-invariant attention scheme gives considerable improvements in validation loss when zero-shot generalising from training on short contexts to validation on longer contexts, and that it is effective at long-context retrieval.
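To make "position-dependent transformation of the attention logits" concrete, below is a minimal sketch in which each query's logits are rescaled by a factor that grows with the logarithm of the number of visible positions, so that the softmax behaves comparably at different context lengths. The logarithmic form, the reference length `base`, and the function names are illustrative assumptions for this sketch, not the paper's exact scheme.

# Illustrative sketch (not the paper's exact method): a position-dependent
# rescaling of attention logits, proportional to log(visible context length),
# applied before the causal softmax.
import numpy as np

def position_dependent_logit_transform(logits: np.ndarray, base: int = 512) -> np.ndarray:
    """Rescale each query's attention logits by log(number of visible keys).

    logits: array of shape (n_queries, n_keys), causal attention assumed.
    base:   hypothetical reference context length; the factor equals 1 when a
            query attends over `base` positions.
    """
    n_queries, n_keys = logits.shape
    # Number of keys visible to each query under a causal mask.
    visible = np.arange(n_keys - n_queries + 1, n_keys + 1)
    scale = np.log(visible) / np.log(base)  # grows slowly with position
    return logits * scale[:, None]

def causal_softmax(logits: np.ndarray) -> np.ndarray:
    """Softmax over keys with a causal mask."""
    n_queries, n_keys = logits.shape
    mask = np.tril(np.ones((n_queries, n_keys), dtype=bool), k=n_keys - n_queries)
    masked = np.where(mask, logits, -np.inf)
    masked -= masked.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(masked)
    return weights / weights.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k = rng.normal(size=(2048, 64)), rng.normal(size=(2048, 64))
    raw_logits = q @ k.T / np.sqrt(64)
    attn = causal_softmax(position_dependent_logit_transform(raw_logits))
    print(attn.shape, attn[-1].sum())  # (2048, 2048) 1.0

Because the rescaling depends only on how many positions a query can see, the same transform can be applied unchanged at inference-time context lengths longer than those seen in training, which is the setting the abstract targets.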
Cite
Text
Anson et al. "Scale-Invariant Attention." Advances in Neural Information Processing Systems, 2025.
Markdown
[Anson et al. "Scale-Invariant Attention." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/anson2025neurips-scaleinvariant/)
BibTeX
@inproceedings{anson2025neurips-scaleinvariant,
  title     = {{Scale-Invariant Attention}},
  author    = {Anson, Ben and Wang, Xi and Aitchison, Laurence},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/anson2025neurips-scaleinvariant/}
}