SEA: Sparse Linear Attention with Estimated Attention Mask
Abstract
The transformer architecture has driven breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, long sequences pose a problem due to the quadratic complexity of the attention operation. Previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher’s attention matrix, and often require complete retraining from scratch. Furthermore, previous sparse and linear approaches lose interpretability if they cannot produce full attention matrices. To address these challenges, we propose SEA: Sparse linear attention with an Estimated Attention mask. SEA estimates the attention matrix with linear complexity via kernel-based linear attention, then subsequently creates a sparse attention matrix with a top-k̂ selection to perform a sparse attention operation. For language modeling tasks (Wikitext2), previous linear and sparse attention methods show roughly two-fold worse perplexity scores over the quadratic OPT-1.3B baseline, while SEA achieves better perplexity than OPT-1.3B, using roughly half the memory of OPT-1.3B. Moreover, SEA maintains an interpretable attention matrix and can utilize knowledge distillation to lower the complexity of existing pretrained transformers. We believe that our work will have a large practical impact, as it opens the possibility of running large transformers on resource-limited devices with less memory. Code: https://github.com/gmlwns2000/sea-attention
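To make the two-stage idea in the abstract concrete, below is a minimal PyTorch sketch of "estimate attention, then attend sparsely on the top-k̂ entries." The feature map (ELU + 1), the row normalization, and the function name `estimate_then_sparse_attention` are illustrative assumptions rather than the paper's implementation: SEA's estimator is learned with distillation and never materializes the full quadratic attention matrix, whereas this toy version builds it explicitly for readability.

```python
import torch
import torch.nn.functional as F

def kernel_feature_map(x):
    # Simple positive feature map (ELU + 1), a common choice in
    # kernel-based linear attention; stands in for SEA's learned estimator.
    return F.elu(x) + 1

def estimate_then_sparse_attention(q, k, v, top_k=4):
    """Illustrative two-stage attention (hypothetical helper, not the SEA API):
    1) cheaply estimate the attention matrix with a kernel feature map,
    2) keep only the top_k entries per query row and attend sparsely there.
    Shapes: q, k, v are (batch, seq_len, dim).
    """
    # Stage 1: kernelized similarity as a rough estimate of attention.
    # (Materialized densely here only for clarity; the real method stays linear.)
    q_f, k_f = kernel_feature_map(q), kernel_feature_map(k)
    est = q_f @ k_f.transpose(-2, -1)                      # (B, T, T) estimate
    est = est / est.sum(dim=-1, keepdim=True).clamp_min(1e-6)

    # Stage 2: top-k selection per query row -> boolean sparse attention mask.
    topk_idx = est.topk(top_k, dim=-1).indices             # (B, T, top_k)
    mask = torch.zeros_like(est, dtype=torch.bool)
    mask.scatter_(-1, topk_idx, True)

    # Exact softmax attention restricted to the selected positions.
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Usage: toy tensors just to show the call signature.
q = torch.randn(2, 16, 32)
k = torch.randn(2, 16, 32)
v = torch.randn(2, 16, 32)
out = estimate_then_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # torch.Size([2, 16, 32])
```

Because the final attention is an ordinary softmax over the selected positions, the sparse matrix remains directly inspectable, which is the interpretability property the abstract highlights.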
Cite
Text
Lee et al. "SEA: Sparse Linear Attention with Estimated Attention Mask." International Conference on Learning Representations, 2024.Markdown
[Lee et al. "SEA: Sparse Linear Attention with Estimated Attention Mask." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/lee2024iclr-sea/)BibTeX
@inproceedings{lee2024iclr-sea,
title = {{SEA: Sparse Linear Attention with Estimated Attention Mask}},
author = {Lee, Heejun and Kim, Jina and Willette, Jeffrey and Hwang, Sung Ju},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/lee2024iclr-sea/}
}