POODLE: Improving Few-Shot Learning via Penalizing Out-of-Distribution Samples

Abstract

In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight with no additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pre-trained networks with different architectures.
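The pull/push objective described above can be sketched as a prototype-based loss: in-distribution features are pulled toward their nearest class prototype while out-of-distribution features are pushed away. The snippet below is a minimal, illustrative PyTorch sketch of this idea, not the paper's exact formulation; the function name `poodle_loss`, the `margin` hyperparameter, and the choice of squared Euclidean distance are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def poodle_loss(prototypes, in_dist_feats, ood_feats, margin=1.0):
    """Illustrative pull/push loss (a sketch, not the paper's exact objective).

    prototypes:     (C, D) class prototypes, e.g., per-class support-set means
    in_dist_feats:  (N, D) in-distribution features (support and query data)
    ood_feats:      (M, D) out-of-distribution features
    margin:         assumed hyperparameter capping the push term
    """
    # Squared Euclidean distance from each sample to each prototype.
    d_in = torch.cdist(in_dist_feats, prototypes) ** 2   # (N, C)
    d_ood = torch.cdist(ood_feats, prototypes) ** 2      # (M, C)

    # Pull: minimize each in-distribution sample's distance to its
    # nearest prototype.
    pull = d_in.min(dim=1).values.mean()

    # Push: maximize each OOD sample's distance to its nearest prototype,
    # up to the margin (hinge form keeps the loss bounded below).
    push = F.relu(margin - d_ood.min(dim=1).values).mean()

    return pull + push
```

Because the loss only touches features and prototypes, it can be applied on top of any frozen or fine-tuned backbone, which is consistent with the abstract's claim that the method is agnostic to the feature extractor.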

Cite

Text

Le et al. "POODLE: Improving Few-Shot Learning via Penalizing Out-of-Distribution Samples." Neural Information Processing Systems, 2021.

Markdown

[Le et al. "POODLE: Improving Few-Shot Learning via Penalizing Out-of-Distribution Samples." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/le2021neurips-poodle/)

BibTeX

@inproceedings{le2021neurips-poodle,
  title     = {{POODLE: Improving Few-Shot Learning via Penalizing Out-of-Distribution Samples}},
  author    = {Le, Duong and Nguyen, Khoi Duc and Nguyen, Khoi and Tran, Quoc-Huy and Nguyen, Rang and Hua, Binh-Son},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/le2021neurips-poodle/}
}