TransFG: A Transformer Architecture for Fine-Grained Recognition

Abstract

Fine-grained visual classification (FGVC), which aims at recognizing objects from subcategories, is a very challenging task due to the inherently subtle inter-class differences. Most existing works tackle this problem by reusing the backbone network to extract features of detected discriminative regions. However, this strategy inevitably complicates the pipeline and pushes the proposed regions to cover most of the object, and thus fails to locate the truly important parts. Recently, the vision transformer (ViT) has shown strong performance on the traditional classification task. The self-attention mechanism of the transformer links every patch token to the classification token. In this work, we first evaluate the effectiveness of the ViT framework in the fine-grained recognition setting. Then, motivated by the observation that the strength of an attention link can be intuitively considered an indicator of a token's importance, we further propose a novel Part Selection Module that can be applied to most transformer architectures, in which we integrate all the raw attention weights of the transformer into an attention map that guides the network to effectively and accurately select discriminative image patches and compute their relations. A contrastive loss is applied to enlarge the distance between feature representations of confusing classes. We name the augmented transformer-based model TransFG and demonstrate its value through experiments on five popular fine-grained benchmarks, where we achieve state-of-the-art performance. Qualitative results are presented for a better understanding of our model.
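The core idea of the abstract's Part Selection Module — combining the raw attention weights of all transformer layers into a single attention map and selecting the patches the classification token attends to most — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function name, the head-averaged input format, the residual-mixing coefficient, and the fixed `top_k` selection are illustrative assumptions.

```python
import numpy as np

def select_discriminative_patches(attn_layers, top_k=4):
    """Integrate per-layer attention into one map and pick the patches
    the [CLS] token attends to most (attention-rollout-style sketch).

    attn_layers: list of (num_tokens, num_tokens) head-averaged attention
                 matrices, one per transformer layer; token 0 is [CLS].
    Returns indices (into the patch tokens) of the top_k patches.
    """
    num_tokens = attn_layers[0].shape[0]
    joint = np.eye(num_tokens)
    for a in attn_layers:
        # Mix in the identity to account for residual connections,
        # then renormalize each row back to a distribution.
        a_res = 0.5 * a + 0.5 * np.eye(num_tokens)
        a_res = a_res / a_res.sum(axis=-1, keepdims=True)
        # Propagate attention through successive layers via matrix product.
        joint = a_res @ joint
    cls_to_patches = joint[0, 1:]  # [CLS] row, excluding [CLS] itself
    return np.argsort(cls_to_patches)[::-1][:top_k]
```

In the sketch, the selected indices would be used to gather the corresponding patch tokens and feed only those (plus [CLS]) into the final transformer layer.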

Cite

Text

He et al. "TransFG: A Transformer Architecture for Fine-Grained Recognition." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I1.19967

Markdown

[He et al. "TransFG: A Transformer Architecture for Fine-Grained Recognition." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/he2022aaai-transfg/) doi:10.1609/AAAI.V36I1.19967

BibTeX

@inproceedings{he2022aaai-transfg,
  title     = {{TransFG: A Transformer Architecture for Fine-Grained Recognition}},
  author    = {He, Ju and Chen, Jieneng and Liu, Shuai and Kortylewski, Adam and Yang, Cheng and Bai, Yutong and Wang, Changhu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {852--860},
  doi       = {10.1609/AAAI.V36I1.19967},
  url       = {https://mlanthology.org/aaai/2022/he2022aaai-transfg/}
}