TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Abstract

Handling previously unseen tasks when given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. Excellent generalization results from this approach. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
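The mechanism in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' exact algorithm: it assumes the projection space is obtained from the (approximate) null space of the error vectors between normalized per-class reference vectors and per-class support means, and that queries are classified by Euclidean distance to the projected references. All function and variable names here are illustrative.

```python
import numpy as np

def task_adaptive_projection(refs, support, labels, proj_dim):
    """Sketch of the episode-wise linear projection (simplified).

    refs:     (C, D) learned per-class reference vectors
    support:  (N, D) embedded support examples for this episode
    labels:   (N,)   class indices in [0, C)
    proj_dim: dimensionality of the projection space
    """
    C, D = refs.shape
    # Per-class mean of the support embeddings
    means = np.stack([support[labels == k].mean(axis=0) for k in range(C)])
    # Error vectors between normalized references and class means
    errs = (refs / np.linalg.norm(refs, axis=1, keepdims=True)
            - means / np.linalg.norm(means, axis=1, keepdims=True))
    # Directions along which references and class means already agree:
    # take the right singular vectors with the smallest singular values,
    # i.e. an approximate null space of the error matrix
    _, _, vt = np.linalg.svd(errs, full_matrices=True)
    M = vt[-proj_dim:].T  # (D, proj_dim) projection matrix
    return M

def classify(query, refs, M):
    """Nearest-reference classification in the projection space."""
    q = query @ M                      # (Q, proj_dim) projected queries
    r = refs @ M                       # (C, proj_dim) projected references
    d = ((q[:, None, :] - r[None, :, :]) ** 2).sum(-1)  # squared distances
    return d.argmin(axis=1)            # predicted class per query
```

In this sketch the projection is recomputed per episode from the support set alone, which matches the abstract's notion of "quick task-specific conditioning" without gradient updates at test time; the embedding network and the reference vectors would be the meta-learned quantities.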

Cite

Text

Yoon et al. "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning." International Conference on Machine Learning, 2019.

Markdown

[Yoon et al. "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/yoon2019icml-tapnet/)

BibTeX

@inproceedings{yoon2019icml-tapnet,
  title     = {{TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning}},
  author    = {Yoon, Sung Whan and Seo, Jun and Moon, Jaekyun},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {7115--7123},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/yoon2019icml-tapnet/}
}