ZeroS: Zero‑Sum Linear Attention for Efficient Transformers
Abstract
Linear attention methods offer Transformers $O(N)$ complexity but typically underperform standard softmax attention. We identify two fundamental limitations affecting these approaches: the restriction to convex combinations, which only permits additive information blending, and a uniform accumulated-weight bias that dilutes attention in long contexts. We propose Zero-Sum Linear Attention (ZeroS), which addresses these limitations by removing the constant zero-order term $1/t$ and reweighting the remaining zero-sum softmax residuals. This modification yields mathematically stable weights that can take both positive and negative values, allowing a single attention layer to perform contrastive operations. While maintaining $O(N)$ complexity, ZeroS theoretically expands the set of representable functions compared to convex combinations. Empirically, it matches or exceeds standard softmax attention across various sequence modeling benchmarks.
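The sketch below illustrates the idea described in the abstract: form standard (convex-combination) linear attention weights, subtract the uniform zero-order term $1/t$, and reweight the remaining zero-sum residual, which can be negative. This is a minimal illustration, not the authors' implementation: the `elu + 1` feature map, the scalar `alpha` reweighting, and the function name are assumptions, and the explicit $O(T^2)$ weight matrix is used only for clarity (the paper's formulation admits an $O(N)$ recurrence).

```python
import torch
import torch.nn.functional as F


def zeros_attention_sketch(q, k, v, alpha=1.0, eps=1e-6):
    """Illustrative zero-sum linear attention (causal, single head).

    q, k: (T, d); v: (T, d_v). Computed with explicit O(T^2) weights for
    clarity; ZeroS itself is formulated to run in O(N).
    """
    T = q.shape[0]
    # Positive feature map; elu + 1 is a common linear-attention choice
    # (assumed here, the paper's exact map may differ).
    phi = lambda x: F.elu(x) + 1.0

    scores = phi(q) @ phi(k).transpose(0, 1)              # (T, T), nonnegative
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
    scores = scores.masked_fill(~causal, 0.0)

    # Convex-combination weights of standard linear attention: each row sums to 1.
    a = scores / scores.sum(dim=-1, keepdim=True).clamp_min(eps)

    # Remove the uniform zero-order term 1/t; the residual sums to zero per row.
    t_len = causal.sum(dim=-1, keepdim=True).to(a.dtype)  # row t has t entries
    residual = a - causal.to(a.dtype) / t_len

    # Reweight the zero-sum residual (alpha is a hypothetical scalar stand-in
    # for the paper's learned reweighting); weights may now be negative.
    return (alpha * residual) @ v                         # (T, d_v)


if __name__ == "__main__":
    torch.manual_seed(0)
    q, k, v = torch.randn(8, 16), torch.randn(8, 16), torch.randn(8, 32)
    print(zeros_attention_sketch(q, k, v).shape)  # torch.Size([8, 32])
```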
Cite
Text
Lu et al. "ZeroS: Zero‑Sum Linear Attention for Efficient Transformers." Advances in Neural Information Processing Systems, 2025.
Markdown
[Lu et al. "ZeroS: Zero‑Sum Linear Attention for Efficient Transformers." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lu2025neurips-zeros/)
BibTeX
@inproceedings{lu2025neurips-zeros,
title = {{ZeroS: Zero‑Sum Linear Attention for Efficient Transformers}},
author = {Lu, Jiecheng and Han, Xu and Sun, Yan and Pati, Viresh and Kim, Yubin and Somani, Siddhartha and Yang, Shihao},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/lu2025neurips-zeros/}
}