Grafting Vision Transformers
Abstract
Vision Transformers (ViTs) have recently become the state-of-the-art across many computer vision tasks. In contrast to convolutional networks (CNNs), ViTs enable global information sharing even within shallow layers of a network, i.e., among high-resolution features. However, this perk was later overlooked with the success of pyramid architectures such as Swin Transformer, which show better performance-complexity trade-offs. In this paper, we present a simple and efficient add-on component (termed GrafT) that considers global dependencies and multi-scale information throughout the network, in both high- and low-resolution features alike. It has the flexibility of branching out at arbitrary depths and shares most of the parameters and computations of the backbone. GrafT shows consistent gains over various well-known models, including both hybrid and pure Transformer types, both homogeneous and pyramid structures, and various self-attention methods. In particular, it largely benefits mobile-size models by providing high-level semantics. On the ImageNet-1k dataset, GrafT delivers +3.9%, +1.4%, and +1.9% top-1 accuracy improvement to DeiT-T, Swin-T, and MobileViT-XXS, respectively. The code and models are at https://github.com/jongwoopark7978/Grafting-Vision-Transformer.
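To make the idea of a grafted branch concrete, the sketch below is one possible interpretation of the abstract, not the authors' released implementation: a hypothetical `GraftedBranch` module pools intermediate patch tokens to a coarser grid, applies global self-attention there while reusing the backbone's attention weights (mirroring the parameter sharing the abstract describes), and fuses the result back into the high-resolution tokens. All module names, shapes, and the `pool_stride` fusion scheme are assumptions; the actual GrafT design is in the linked repository.

```python
# Conceptual sketch only (NOT the official GrafT code): a grafted branch that
# downsamples tokens, runs global attention at low resolution with shared
# backbone weights, and adds the result back to the high-resolution tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraftedBranch(nn.Module):
    """Toy grafted branch attached to a ViT stage (illustrative assumption)."""

    def __init__(self, backbone_attn: nn.MultiheadAttention, pool_stride: int = 2):
        super().__init__()
        # Parameter sharing: reuse the backbone's attention module instead of
        # introducing a new one, as suggested by the abstract.
        self.attn = backbone_attn
        self.pool_stride = pool_stride

    def forward(self, tokens: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # tokens: (B, N, C) patch tokens from an intermediate backbone layer
        B, N, C = tokens.shape
        H, W = grid_hw
        x = tokens.transpose(1, 2).reshape(B, C, H, W)

        # Downsample to a coarse grid so global attention stays cheap
        coarse = F.avg_pool2d(x, kernel_size=self.pool_stride)
        h, w = coarse.shape[-2:]
        coarse_tokens = coarse.flatten(2).transpose(1, 2)  # (B, h*w, C)

        # Global self-attention among low-resolution tokens (shared weights)
        out, _ = self.attn(coarse_tokens, coarse_tokens, coarse_tokens)

        # Upsample back and fuse with the original high-resolution tokens
        out = out.transpose(1, 2).reshape(B, C, h, w)
        out = F.interpolate(out, size=(H, W), mode="bilinear", align_corners=False)
        return tokens + out.flatten(2).transpose(1, 2)


if __name__ == "__main__":
    # Usage sketch: graft the branch onto a shared attention module
    shared_attn = nn.MultiheadAttention(embed_dim=192, num_heads=3, batch_first=True)
    branch = GraftedBranch(shared_attn, pool_stride=2)
    feats = torch.randn(2, 14 * 14, 192)  # DeiT-T-like token grid
    print(branch(feats, (14, 14)).shape)  # torch.Size([2, 196, 192])
```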
Cite
Text
Park et al. "Grafting Vision Transformers." Winter Conference on Applications of Computer Vision, 2024.Markdown
[Park et al. "Grafting Vision Transformers." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/park2024wacv-grafting/)BibTeX
@inproceedings{park2024wacv-grafting,
title = {{Grafting Vision Transformers}},
author = {Park, Jongwoo and Kahatapitiya, Kumara and Kim, Donghyun and Sudalairaj, Shivchander and Fan, Quanfu and Ryoo, Michael S.},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {1145-1154},
url = {https://mlanthology.org/wacv/2024/park2024wacv-grafting/}
}