Adaptive Token Sampling for Efficient Vision Transformers

Abstract

While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable parameter-free Adaptive Token Sampler (ATS) module, which can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer constant and varies for each input image. By integrating ATS as an additional layer within current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training. Moreover, due to its differentiable design, one can also train a vision transformer equipped with ATS. We evaluate the efficiency of our module in both image and video classification tasks by adding it to multiple SOTA vision transformers. Our proposed module improves the SOTA by reducing their computational costs (GFLOPs) by 2$\times$, while preserving their accuracy on the ImageNet, Kinetics-400, and Kinetics-600 datasets.
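
To make the scoring-and-sampling idea more concrete, below is a minimal, single-image sketch in PyTorch of how such a parameter-free step could look: each token is scored by the attention the class token pays to it, weighted by the norm of its value vector, and an inverse-transform sampling over the cumulative score distribution keeps a variable number of tokens. The function name `ats_select_tokens`, the exact normalization, and the shapes in the usage example are illustrative assumptions, not the authors' released implementation.

```python
import torch


def ats_select_tokens(attn, v, n_max=197):
    """Simplified, single-image sketch of adaptive token sampling.

    attn: (H, N, N) attention weights of one block (token 0 = class token)
    v:    (H, N, D) value vectors of the same block
    Returns indices of the tokens to keep (class token always included).
    """
    # Significance score of token j: attention the class token pays to j,
    # weighted by the norm of j's value vector, averaged over heads.
    v_norm = v.norm(dim=-1)                          # (H, N)
    scores = (attn[:, 0, :] * v_norm).mean(dim=0)    # (N,)
    scores[0] = 0.0                                  # class token handled separately
    scores = scores / scores.sum()

    # Inverse transform sampling: map n_max evenly spaced points on [0, 1]
    # through the inverse CDF of the score distribution. Tokens hit more than
    # once are kept only once, so the number of surviving tokens adapts to
    # how concentrated the scores are for this particular image.
    cdf = torch.cumsum(scores, dim=0)                # (N,)
    u = torch.linspace(0.0, 1.0, steps=n_max, device=scores.device)
    picked = torch.searchsorted(cdf, u).clamp(max=scores.numel() - 1)
    keep = torch.unique(picked)

    # Always keep the class token (index 0).
    cls_idx = torch.zeros(1, dtype=keep.dtype, device=keep.device)
    return torch.unique(torch.cat([cls_idx, keep]))


# Usage with random inputs (hypothetical ViT-B/16-like shapes:
# 12 heads, 197 tokens, 64-dim values per head).
attn = torch.rand(12, 197, 197).softmax(dim=-1)
v = torch.randn(12, 197, 64)
kept = ats_select_tokens(attn, v, n_max=100)
print(kept.numel())  # varies with how concentrated the scores are
```

In a full model, the surviving tokens would be gathered and passed to the remaining layers of the block, which is what makes the per-image token count, and hence the GFLOPs, adaptive.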

Cite

Text

Fayyaz et al. "Adaptive Token Sampling for Efficient Vision Transformers." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20083-0_24

Markdown

[Fayyaz et al. "Adaptive Token Sampling for Efficient Vision Transformers." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/fayyaz2022eccv-adaptive/) doi:10.1007/978-3-031-20083-0_24

BibTeX

@inproceedings{fayyaz2022eccv-adaptive,
  title     = {{Adaptive Token Sampling for Efficient Vision Transformers}},
  author    = {Fayyaz, Mohsen and Koohpayegani, Soroush Abbasi and Jafari, Farnoush Rezaei and Sengupta, Sunando and Joze, Hamid Reza Vaezi and Sommerlade, Eric and Pirsiavash, Hamed and Gall, Jürgen},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20083-0_24},
  url       = {https://mlanthology.org/eccv/2022/fayyaz2022eccv-adaptive/}
}