Learning Correlation Structures for Vision Transformers

Abstract

We introduce a new attention mechanism, dubbed structural self-attention (StructSA), that leverages rich correlation patterns naturally emerging in key-query interactions of attention. StructSA generates attention maps by recognizing space-time structures of key-query correlations via convolution and uses them to dynamically aggregate local contexts of value features. This effectively leverages rich structural patterns in images and videos, such as scene layouts, object motion, and inter-object relations. Using StructSA as a main building block, we develop the structural vision transformer (StructViT) and evaluate its effectiveness on both image and video classification tasks, achieving state-of-the-art results on ImageNet-1K, Kinetics-400, Something-Something V1 & V2, Diving-48, and FineGym.
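
The sketch below illustrates the core idea described in the abstract: compute key-query correlation maps, convolve them to capture local correlation structures, and use the result as attention weights over the values. It is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the module name `StructSASketch`, its parameters, and the use of a plain depthwise convolution and standard value aggregation are assumptions made for illustration (the paper's dynamic aggregation of local value contexts is not reproduced here).

import torch
import torch.nn as nn


class StructSASketch(nn.Module):
    """Toy structural-attention sketch: convolve each query's correlation map
    over the key grid before softmax, so attention reflects local structure."""

    def __init__(self, dim, heads=4, kernel_size=3):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # Depthwise conv applied to each query's H x W correlation map,
        # one channel per head.
        self.struct_conv = nn.Conv2d(heads, heads, kernel_size,
                                     padding=kernel_size // 2, groups=heads)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, hw):
        # x: (B, N, C) tokens; hw: (H, W) with N == H * W
        B, N, C = x.shape
        H, W = hw
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.heads, -1).transpose(1, 2)  # (B, h, N, d)
        k = k.view(B, N, self.heads, -1).transpose(1, 2)
        v = v.view(B, N, self.heads, -1).transpose(1, 2)

        corr = (q @ k.transpose(-2, -1)) * self.scale      # (B, h, N, N)
        # Reshape each query's correlation row into an H x W map and convolve
        # it to pick up local spatial structure in the correlations.
        corr = corr.permute(0, 2, 1, 3).reshape(B * N, self.heads, H, W)
        corr = self.struct_conv(corr)
        corr = corr.reshape(B, N, self.heads, N).permute(0, 2, 1, 3)

        attn = corr.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Example usage: an 8x8 grid of 64-dimensional tokens.
x = torch.randn(2, 64, 64)
y = StructSASketch(dim=64, heads=4)(x, hw=(8, 8))
print(y.shape)  # torch.Size([2, 64, 64])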

Cite

Text

Kim et al. "Learning Correlation Structures for Vision Transformers." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01792

Markdown

[Kim et al. "Learning Correlation Structures for Vision Transformers." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/kim2024cvpr-learning-a/) doi:10.1109/CVPR52733.2024.01792

BibTeX

@inproceedings{kim2024cvpr-learning-a,
  title     = {{Learning Correlation Structures for Vision Transformers}},
  author    = {Kim, Manjin and Seo, Paul Hongsuck and Schmid, Cordelia and Cho, Minsu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {18941--18951},
  doi       = {10.1109/CVPR52733.2024.01792},
  url       = {https://mlanthology.org/cvpr/2024/kim2024cvpr-learning-a/}
}