A Closer Look at Few-Shot Classification

Abstract

Few-shot classification aims to learn a classifier that can recognize classes unseen during training from only a limited number of labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods, including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
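
The "baseline method with a standard fine-tuning practice" mentioned above evaluates a novel-class episode by freezing a feature backbone pretrained on base classes and fitting a fresh linear classifier on the few labeled support examples. The sketch below is a minimal, hedged illustration of that fine-tuning step, not the authors' reference implementation; the toy backbone, hyperparameters, and tensor shapes are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch (not the paper's official code) of baseline-style few-shot
# evaluation: freeze a pretrained backbone, fine-tune a new linear head on the
# support set, then classify the query set. Shapes and the toy backbone are
# illustrative assumptions.
import torch
import torch.nn as nn


def finetune_and_classify(backbone, support_x, support_y, query_x,
                          n_way=5, steps=100, lr=0.01):
    """Fit a fresh linear head on the support set, then label the query set."""
    backbone.eval()                               # backbone stays frozen
    with torch.no_grad():
        z_support = backbone(support_x)           # [n_way * k_shot, feat_dim]
        z_query = backbone(query_x)               # [n_query, feat_dim]

    classifier = nn.Linear(z_support.size(1), n_way)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):                        # standard fine-tuning loop
        optimizer.zero_grad()
        loss = loss_fn(classifier(z_support), support_y)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return classifier(z_query).argmax(dim=1)  # predicted novel-class labels


if __name__ == "__main__":
    # Toy 5-way 1-shot episode with random data, just to show the call pattern.
    feat_dim, n_way, k_shot, n_query = 64, 5, 1, 15
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, feat_dim))
    support_x = torch.randn(n_way * k_shot, 3, 84, 84)
    support_y = torch.arange(n_way).repeat_interleave(k_shot)
    query_x = torch.randn(n_query, 3, 84, 84)
    print(finetune_and_classify(backbone, support_x, support_y, query_x, n_way))
```

In the paper's cross-domain setting, this simple fine-tuning of a new classifier at test time is what allows the baseline to adapt to novel domains, which is the behavior the abstract contrasts with meta-learning approaches.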

Cite

Text

Chen et al. "A Closer Look at Few-Shot Classification." International Conference on Learning Representations, 2019.

Markdown

[Chen et al. "A Closer Look at Few-Shot Classification." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/chen2019iclr-closer/)

BibTeX

@inproceedings{chen2019iclr-closer,
  title     = {{A Closer Look at Few-Shot Classification}},
  author    = {Chen, Wei-Yu and Liu, Yen-Cheng and Kira, Zsolt and Wang, Yu-Chiang Frank and Huang, Jia-Bin},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/chen2019iclr-closer/}
}