Matching Feature Sets for Few-Shot Image Classification

Abstract

In image classification, it is common practice to train deep networks to extract a single feature vector per input image. Few-shot classification methods also mostly follow this trend. In this work, we depart from this established direction and instead propose to extract sets of feature vectors for each image. We argue that a set-based representation intrinsically builds a richer representation of images from the base classes, which can subsequently transfer better to the few-shot classes. To this end, we adapt existing feature extractors to produce sets of feature vectors from images. Our approach, dubbed SetFeat, embeds shallow self-attention mechanisms inside existing encoder architectures. The attention modules are lightweight, so our method yields encoders with approximately the same number of parameters as their original versions. During training and inference, a set-to-set matching metric is used to perform image classification. The effectiveness of our proposed architecture and matching metric is demonstrated via thorough experiments on standard few-shot datasets (miniImageNet, tieredImageNet, and CUB) in both the 1- and 5-shot scenarios. In all cases but one, our method outperforms the state-of-the-art.
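
To make the idea concrete, the sketch below illustrates one plausible instantiation of the two ingredients the abstract describes: a shallow self-attention head that pools a backbone's spatial feature map into a small set of vectors, and a set-to-set matching score (here, a sum-of-max cosine similarity). This is a minimal, hypothetical sketch under assumed dimensions and module names, not the authors' SetFeat implementation or their specific metric.

```python
# Hypothetical sketch of a set-based feature extractor and a set-to-set
# matching score, loosely following the abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetHead(nn.Module):
    """Shallow self-attention head mapping a CNN feature map to a set
    of k feature vectors (all dimensions here are illustrative)."""
    def __init__(self, channels: int, k: int = 5):
        super().__init__()
        # One learned query per set element.
        self.queries = nn.Parameter(torch.randn(k, channels))
        self.key_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.val_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        # fmap: (B, C, H, W) -> set of vectors: (B, k, C)
        b, c, h, w = fmap.shape
        keys = self.key_proj(fmap).flatten(2)   # (B, C, H*W)
        vals = self.val_proj(fmap).flatten(2)   # (B, C, H*W)
        attn = torch.einsum('kc,bcl->bkl', self.queries, keys) / c ** 0.5
        attn = attn.softmax(dim=-1)             # attend over spatial locations
        return torch.einsum('bkl,bcl->bkc', attn, vals)

def set_to_set_score(query_set: torch.Tensor, support_set: torch.Tensor) -> torch.Tensor:
    """Sum-of-max cosine similarity between two sets of vectors,
    one simple choice of set-to-set matching metric."""
    q = F.normalize(query_set, dim=-1)    # (k, C)
    s = F.normalize(support_set, dim=-1)  # (k, C)
    sim = q @ s.T                         # (k, k) pairwise cosine similarities
    # Match each query vector to its best-matching support vector.
    return sim.max(dim=1).values.sum()
```

In an episodic evaluation, a query image would then be assigned the class of the support example (or class-level set of vectors) that maximizes this matching score.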

Cite

Text

Afrasiyabi et al. "Matching Feature Sets for Few-Shot Image Classification." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00881

Markdown

[Afrasiyabi et al. "Matching Feature Sets for Few-Shot Image Classification." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/afrasiyabi2022cvpr-matching/) doi:10.1109/CVPR52688.2022.00881

BibTeX

@inproceedings{afrasiyabi2022cvpr-matching,
  title     = {{Matching Feature Sets for Few-Shot Image Classification}},
  author    = {Afrasiyabi, Arman and Larochelle, Hugo and Lalonde, Jean-François and Gagné, Christian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {9014--9024},
  doi       = {10.1109/CVPR52688.2022.00881},
  url       = {https://mlanthology.org/cvpr/2022/afrasiyabi2022cvpr-matching/}
}