SparseFormer: Sparse Visual Recognition via Limited Latent Tokens

Abstract

Human visual recognition is a sparse process, attending to only a few salient visual cues rather than traversing every detail uniformly. However, most current vision networks follow a dense paradigm, processing every visual unit (such as a pixel or patch) in a uniform manner. In this paper, we challenge this dense convention and present a new vision transformer, termed SparseFormer, that explicitly imitates the sparsity of human visual recognition in an end-to-end manner. SparseFormer learns to represent images with a highly limited number of tokens (e.g., down to $9$) in the latent space via a sparse feature sampling procedure, instead of processing dense units in the original image space. SparseFormer therefore circumvents most dense operations in image space and has much lower computational cost. Experiments on ImageNet-1K classification show that SparseFormer performs on par with canonical, well-established models while offering a more favorable accuracy-throughput tradeoff. Moreover, the design extends readily to video classification, achieving promising performance at lower compute. We hope this work provides an alternative path for visual modeling and inspires further research on sparse vision architectures. Code and weights are available at https://github.com/showlab/sparseformer.
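To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of a single latent-token stage: a handful of learnable tokens, each owning a region of interest (RoI), bilinearly sample a few points from a coarse image feature map, fold the sampled features into their embeddings, and then interact via self-attention. All names here (`SparseFormerSketch`, `offset_head`, `fold`) and the initialization choices are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch only: a few latent tokens sparsely sample image features
# instead of attending over a dense patch grid (assumed structure, not the
# official SparseFormer code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseFormerSketch(nn.Module):
    def __init__(self, dim=256, num_tokens=9, num_points=16, num_classes=1000):
        super().__init__()
        self.num_points = num_points
        self.stem = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # cheap coarse feature map
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        # Per-token RoI (cx, cy, w, h) in normalized [0, 1] coords; start at full image.
        self.rois = nn.Parameter(torch.tensor([[0.5, 0.5, 1.0, 1.0]] * num_tokens))
        self.offset_head = nn.Linear(dim, num_points * 2)          # sampling offsets per token
        self.fold = nn.Linear(num_points * dim, dim)               # fold samples into the token
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):                                      # images: (B, 3, H, W)
        B = images.shape[0]
        feat = self.stem(images)                                    # (B, C, h, w)
        tokens = self.tokens.unsqueeze(0).expand(B, -1, -1)         # (B, T, C)
        cx, cy, w, h = self.rois.unbind(-1)                         # each (T,)
        centers = torch.stack([cx, cy], -1).unsqueeze(1)            # (T, 1, 2)
        scales = (torch.stack([w, h], -1) * 0.5).unsqueeze(1)       # (T, 1, 2)
        # Each token predicts a few sampling points inside its own RoI.
        off = torch.tanh(self.offset_head(tokens))                  # (B, T, 2P) in [-1, 1]
        off = off.view(B, -1, self.num_points, 2)                   # (B, T, P, 2)
        pts = centers + off * scales                                # (B, T, P, 2) in [0, 1]
        grid = pts * 2 - 1                                          # grid_sample expects [-1, 1]
        sampled = F.grid_sample(feat, grid, align_corners=False)    # (B, C, T, P)
        sampled = sampled.permute(0, 2, 3, 1).flatten(2)            # (B, T, P*C)
        tokens = tokens + self.fold(sampled)                        # sparse feature sampling
        x = self.norm(tokens)
        tokens = tokens + self.attn(x, x, x, need_weights=False)[0] # token-level self-attention
        return self.head(tokens.mean(dim=1))                        # (B, num_classes)

model = SparseFormerSketch()
logits = model(torch.randn(2, 3, 224, 224))                         # -> shape (2, 1000)
```

Note that the compute scales with the number of tokens and sampling points, not the image resolution, which is where the favorable accuracy-throughput tradeoff comes from. The paper's full model stacks several such stages and, as described in the abstract's end-to-end formulation, refines the token RoIs along the way so tokens can concentrate on salient regions; this sketch compresses that into a single stage.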

Cite

Text

Gao et al. "SparseFormer: Sparse Visual Recognition via Limited Latent Tokens." International Conference on Learning Representations, 2024.

Markdown

[Gao et al. "SparseFormer: Sparse Visual Recognition via Limited Latent Tokens." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/gao2024iclr-sparseformer/)

BibTeX

@inproceedings{gao2024iclr-sparseformer,
  title     = {{SparseFormer: Sparse Visual Recognition via Limited Latent Tokens}},
  author    = {Gao, Ziteng and Tong, Zhan and Wang, Limin and Shou, Mike Zheng},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/gao2024iclr-sparseformer/}
}