Image Block Augmentation for One-Shot Learning

Abstract

Given only one or a few training instances of novel classes, the one-shot learning task requires a classifier to generalize to these novel classes. Directly training a one-shot classifier may suffer from the scarcity of training instances. Previous one-shot learning works investigate meta-learning or metric-based algorithms; in contrast, this paper proposes a Self-Training Jigsaw Augmentation (Self-Jig) method for one-shot learning. Specifically, we address one-shot learning by directly augmenting the training images, leveraging vast numbers of unlabeled instances. Our proposed Self-Jig algorithm synthesizes new images from labeled probe and unlabeled gallery images. The labels of the gallery images are predicted to guide the augmentation process, which can be viewed as a self-training scheme. Intrinsically, we argue that this provides a very useful way of directly generating massive amounts of training images for novel classes. Extensive experiments and an ablation study not only evaluate the efficacy of the proposed Self-Jig method but also reveal insights into it.
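The block-based synthesis described in the abstract can be sketched as follows. This is a minimal illustration of the jigsaw idea, not the paper's implementation: the function name, grid size, and number of swapped blocks are assumptions for demonstration; the paper additionally predicts pseudo-labels for gallery images to select which ones to mix with a given probe.

```python
import numpy as np

def jigsaw_augment(probe, gallery, grid=3, n_swap=4, rng=None):
    """Synthesize a new training image by replacing `n_swap` of the
    grid x grid blocks of a labeled probe image with the corresponding
    blocks of an unlabeled gallery image (hypothetical sketch of the
    Self-Jig block-mixing step; parameters are illustrative)."""
    rng = np.random.default_rng(rng)
    h, w = probe.shape[:2]
    bh, bw = h // grid, w // grid
    out = probe.copy()
    # Randomly pick which block positions come from the gallery image.
    idx = rng.choice(grid * grid, size=n_swap, replace=False)
    for k in idx:
        r, c = divmod(int(k), grid)
        out[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw] = \
            gallery[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
    return out
```

The synthesized image keeps the probe's label, so each probe/gallery pairing yields an additional labeled training instance for the novel class.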

Cite

Text

Chen et al. "Image Block Augmentation for One-Shot Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33013379

Markdown

[Chen et al. "Image Block Augmentation for One-Shot Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/chen2019aaai-image/) doi:10.1609/AAAI.V33I01.33013379

BibTeX

@inproceedings{chen2019aaai-image,
  title     = {{Image Block Augmentation for One-Shot Learning}},
  author    = {Chen, Zitian and Fu, Yanwei and Chen, Kaiyu and Jiang, Yu-Gang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {3379-3386},
  doi       = {10.1609/AAAI.V33I01.33013379},
  url       = {https://mlanthology.org/aaai/2019/chen2019aaai-image/}
}