Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need?

Abstract

Recent meta-learning research has focused on developing learning algorithms that can quickly adapt to test-time tasks with limited data and low computational cost. Few-shot learning is widely used as one of the standard benchmarks for meta-learning. In this work, we show that a simple baseline outperforms state-of-the-art few-shot learning methods: learn a supervised or self-supervised representation on the meta-training set, then train a linear classifier on top of this representation. Self-distillation provides an additional boost. This demonstrates that a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
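The baseline the abstract describes can be sketched as: embed images with a frozen pretrained network, then fit only a linear head on each few-shot episode's support set. The following is a minimal illustrative sketch, not the authors' code: the "embedding network" is a fixed random projection, the data is synthetic, and the linear head is closed-form ridge regression to one-hot targets rather than the logistic regression used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WAY, K_SHOT, N_QUERY, DIM, FEAT_DIM = 5, 5, 15, 16, 32

# Frozen "embedding network": a fixed random projection with a ReLU,
# standing in for a network pretrained on the meta-training set.
W_embed = rng.normal(size=(DIM, FEAT_DIM)) / np.sqrt(DIM)

def embed(x):
    return np.maximum(x @ W_embed, 0.0)

def fit_linear_head(feats, labels, n_classes, reg=1e-3):
    # Closed-form ridge regression to one-hot targets
    # (a simple stand-in for a logistic-regression head).
    Y = np.eye(n_classes)[labels]
    A = feats.T @ feats + reg * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ Y)

def predict(feats, W_head):
    return (feats @ W_head).argmax(axis=1)

# Synthetic 5-way episode: each class is a well-separated Gaussian blob.
means = 5.0 * np.eye(N_WAY, DIM)
def sample(n_per_class):
    x = np.concatenate([m + 0.1 * rng.normal(size=(n_per_class, DIM)) for m in means])
    y = np.repeat(np.arange(N_WAY), n_per_class)
    return x, y

x_support, y_support = sample(K_SHOT)
x_query, y_query = sample(N_QUERY)

# Few-shot "adaptation" is just fitting the linear head on the support set;
# the embedding stays fixed.
W_head = fit_linear_head(embed(x_support), y_support, N_WAY)
accuracy = (predict(embed(x_query), W_head) == y_query).mean()
print(f"query accuracy: {accuracy:.2f}")
```

The key design point the paper argues for is visible here: all episode-specific learning is confined to the cheap linear head, while the heavy lifting is done once by the (here, mock) embedding.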

Cite

Text

Tian et al. "Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need?" Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58568-6_16

Markdown

[Tian et al. "Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need?" Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/tian2020eccv-rethinking/) doi:10.1007/978-3-030-58568-6_16

BibTeX

@inproceedings{tian2020eccv-rethinking,
  title     = {{Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need?}},
  author    = {Tian, Yonglong and Wang, Yue and Krishnan, Dilip and Tenenbaum, Joshua B. and Isola, Phillip},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58568-6_16},
  url       = {https://mlanthology.org/eccv/2020/tian2020eccv-rethinking/}
}