Streaming Attention Approximation via Discrepancy Theory
Abstract
Large language models (LLMs) have achieved impressive success, but their high memory requirements present challenges for long-context token generation. In this paper we study the streaming complexity of attention approximation, a key computational primitive underlying token generation. Our main contribution is BalanceKV, a streaming algorithm for $\epsilon$-approximating attention computations, based on a geometric process that selects a balanced collection of Key and Value tokens in the spirit of Banaszczyk's vector balancing theory. We complement our algorithm with space lower bounds for streaming attention computation. Beyond its strong theoretical guarantees, BalanceKV exhibits empirically validated performance improvements over existing methods, both for attention approximation and for end-to-end performance on various long-context benchmarks.
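For illustration, the following is a minimal sketch (Python with NumPy) of the general idea behind discrepancy-based Key/Value selection, not the authors' BalanceKV implementation: a self-balancing walk assigns a sign to each Key/Value pair so that the signed sum of their feature vectors stays small, one sign class is retained, and softmax attention is computed over that half. Because a well-balanced split shrinks the softmax numerator and denominator by roughly the same factor, the normalization cancels. The balancing features, constants, and function names below are assumptions made for this sketch and carry none of the paper's guarantees.

import numpy as np

def balanced_halving(keys, values, rng=None):
    # Illustrative sketch (not the paper's BalanceKV): assign a sign to each
    # (key, value) pair with a self-balancing walk so that the signed sum of
    # feature vectors stays small, then keep the pairs of one sign.
    if rng is None:
        rng = np.random.default_rng(0)
    n = keys.shape[0]
    # Balancing features: normalized concatenation of key and value rows
    # (an assumption for this sketch; the paper defines its own features).
    feats = np.concatenate([keys, values], axis=1)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    walk = np.zeros(feats.shape[1])
    signs = np.empty(n, dtype=int)
    C = 30.0  # scale constant for the self-balancing walk (illustrative)
    for i in range(n):
        corr = float(walk @ feats[i])
        # Bias the sign so the walk drifts back toward the origin.
        p_plus = np.clip(0.5 - corr / (2.0 * C), 0.0, 1.0)
        signs[i] = 1 if rng.random() < p_plus else -1
        walk += signs[i] * feats[i]
    keep = signs > 0
    return keys[keep], values[keep]

def softmax_attention(q, keys, values):
    # Standard single-query softmax attention.
    scores = keys @ q / np.sqrt(keys.shape[1])
    w = np.exp(scores - scores.max())
    return (w / w.sum()) @ values

# Usage: compare full attention with attention over the retained half.
rng = np.random.default_rng(1)
K = rng.standard_normal((4096, 64)); V = rng.standard_normal((4096, 64))
q = rng.standard_normal(64)
K_half, V_half = balanced_halving(K, V, rng)
full = softmax_attention(q, K, V)
half = softmax_attention(q, K_half, V_half)
print(np.linalg.norm(full - half) / np.linalg.norm(full))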
Cite
Text
Kochetkova et al. "Streaming Attention Approximation via Discrepancy Theory." Advances in Neural Information Processing Systems, 2025.
BibTeX
@inproceedings{kochetkova2025neurips-streaming,
title = {{Streaming Attention Approximation via Discrepancy Theory}},
author = {Kochetkova, Ekaterina and Sheth, Kshiteej and Han, Insu and Zandieh, Amir and Kapralov, Michael},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/kochetkova2025neurips-streaming/}
}