Rethinking Attention with Performers
Abstract
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.
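The linear complexity claimed in the abstract comes from the FAVOR+ idea: queries and keys are mapped through positive random features so that the softmax kernel exp(q·k) is approximated by an inner product of feature vectors, and the attention output is then computed by reassociating the matrix products. The following is a minimal NumPy sketch of that idea under simplifying assumptions (plain Gaussian rather than orthogonalized projections, a single head, no masking); the names positive_random_features and favor_plus_attention are illustrative, not the authors' reference implementation.

import numpy as np

def positive_random_features(x, projection, eps=1e-6):
    """Map x of shape (L, d) to positive features phi(x) of shape (L, m),
    so that phi(q) @ phi(k).T approximates exp(q @ k.T) in expectation."""
    m = projection.shape[0]
    # exp(w.x - |x|^2 / 2) keeps every feature strictly positive.
    sq_norm = 0.5 * np.sum(x ** 2, axis=-1, keepdims=True)
    return np.exp(x @ projection.T - sq_norm) / np.sqrt(m) + eps

def favor_plus_attention(Q, K, V, num_features=256, seed=0):
    """Linear-time approximation of softmax(Q K^T / sqrt(d)) V."""
    L, d = Q.shape
    rng = np.random.default_rng(seed)
    # Gaussian projections; the paper additionally orthogonalizes these rows.
    W = rng.standard_normal((num_features, d))
    scale = d ** -0.25  # folds the 1/sqrt(d) temperature into the features
    q_prime = positive_random_features(Q * scale, W)   # (L, m)
    k_prime = positive_random_features(K * scale, W)   # (L, m)
    # Reassociate: phi(Q) @ (phi(K)^T V) costs O(L m d) instead of O(L^2 d).
    kv = k_prime.T @ V                                  # (m, d)
    numerator = q_prime @ kv                            # (L, d)
    normalizer = q_prime @ k_prime.sum(axis=0)          # (L,)
    return numerator / normalizer[:, None]

For short sequences the output can be checked directly against exact softmax attention; the approximation error shrinks as num_features grows, which is the trade-off the paper analyzes.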
Cite
Text
Choromanski et al. "Rethinking Attention with Performers." International Conference on Learning Representations, 2021.
Markdown
[Choromanski et al. "Rethinking Attention with Performers." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/choromanski2021iclr-rethinking/)
BibTeX
@inproceedings{choromanski2021iclr-rethinking,
  title     = {{Rethinking Attention with Performers}},
  author    = {Choromanski, Krzysztof Marcin and Likhosherstov, Valerii and Dohan, David and Song, Xingyou and Gane, Andreea and Sarlos, Tamas and Hawkins, Peter and Davis, Jared Quincy and Mohiuddin, Afroz and Kaiser, Lukasz and Belanger, David Benjamin and Colwell, Lucy J and Weller, Adrian},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/choromanski2021iclr-rethinking/}
}