Hydra Attention: Efficient Attention with Many Heads

Abstract

While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult. A large reason for this is that self-attention scales quadratically with the number of tokens, which, in turn, scales quadratically with the image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We take a step toward solving this issue by introducing Hydra Attention, an extremely efficient attention operation for Vision Transformers (ViTs). Paradoxically, this efficiency comes from taking multi-head attention to its extreme: by using as many attention heads as there are features, Hydra Attention is computationally linear in both tokens and features with no hidden constants, making it significantly faster than standard self-attention in an off-the-shelf ViT-B/16 by a factor of the token count. Moreover, Hydra Attention retains high accuracy on ImageNet and, in some cases, actually improves it.
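The core idea described above (as many heads as features, giving linear cost in both tokens and features) can be sketched with a decomposable kernel: with head dimension 1, attention reduces to an elementwise gating of each query by a single global key-value aggregate. The sketch below is a minimal illustration, not the authors' implementation; it assumes an L2-normalization (cosine-similarity) kernel and a single `(tokens, features)` input per tensor, omitting batching and the rest of the ViT block.

```python
import numpy as np

def hydra_attention(q, k, v, eps=1e-6):
    """Hydra-style attention sketch: heads == features, so each head has
    dimension 1 and the whole operation costs O(tokens * features).

    q, k, v: arrays of shape (tokens, features).
    Returns an array of shape (tokens, features).
    """
    # Cosine-similarity kernel: L2-normalize q and k along the feature axis.
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + eps)
    # One global aggregate per feature: sum over tokens of k ⊙ v -> (features,).
    kv = (k * v).sum(axis=0)
    # Gate each token's (normalized) query by the aggregate -> (tokens, features).
    return q * kv

# The aggregate is computed once and reused by every token, which is where
# the linear (rather than quadratic) dependence on token count comes from.
tokens, features = 196, 768  # hypothetical ViT-B/16-like sizes
rng = np.random.default_rng(0)
out = hydra_attention(rng.standard_normal((tokens, features)),
                      rng.standard_normal((tokens, features)),
                      rng.standard_normal((tokens, features)))
print(out.shape)  # (196, 768)
```

Because the kernel is decomposable, the token-token similarity matrix is never materialized, which matches the paper's claim of linearity in tokens with no hidden constants.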

Cite

Text

Bolya et al. "Hydra Attention: Efficient Attention with Many Heads." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25082-8_3

Markdown

[Bolya et al. "Hydra Attention: Efficient Attention with Many Heads." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/bolya2022eccvw-hydra/) doi:10.1007/978-3-031-25082-8_3

BibTeX

@inproceedings{bolya2022eccvw-hydra,
  title     = {{Hydra Attention: Efficient Attention with Many Heads}},
  author    = {Bolya, Daniel and Fu, Cheng-Yang and Dai, Xiaoliang and Zhang, Peizhao and Hoffman, Judy},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {35--49},
  doi       = {10.1007/978-3-031-25082-8_3},
  url       = {https://mlanthology.org/eccvw/2022/bolya2022eccvw-hydra/}
}