Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition
Abstract
This paper studies one-shot fine-grained recognition, which suffers from data scarcity for novel fine-grained classes. To alleviate this problem, an off-the-shelf image generator can be applied to synthesize additional images for one-shot learning. However, such synthesized images may not help one-shot fine-grained recognition, due to the large domain discrepancy between synthesized and original images. To this end, this paper proposes a meta-learning framework that reinforces the generated images with the original images so that they facilitate one-shot learning. Specifically, the generic image generator is updated with the few training instances of the novel classes, and a Meta Image Reinforcing Network (MetaIRNet) is proposed to perform both one-shot fine-grained recognition and image reinforcement. The model is trained end-to-end, and our experiments demonstrate consistent improvements over baselines on one-shot fine-grained image classification benchmarks.
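To make the "image reinforcement" idea concrete, the sketch below blends a generated image with its original using a per-cell weight grid. This is a hypothetical, simplified illustration of the fusion concept described in the abstract, not the paper's actual MetaIRNet architecture: the function name `reinforce_image`, the fixed grid, and the hand-supplied weights are all assumptions (in the paper, such weights would be produced by a learned network trained end-to-end with the classifier).

```python
import numpy as np

def reinforce_image(original, generated, weights, grid=3):
    """Blend `generated` into `original` cell by cell.

    Hypothetical simplification: each cell (i, j) of a grid x grid
    partition is mixed as w * original + (1 - w) * generated, where
    w = weights[i, j]. Assumes image height/width divisible by `grid`.
    """
    h, w = original.shape[:2]
    ch, cw = h // grid, w // grid
    # Start from the original so any remainder pixels stay untouched.
    out = original.astype(float).copy()
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * ch, (i + 1) * ch)
            xs = slice(j * cw, (j + 1) * cw)
            a = weights[i, j]
            out[ys, xs] = a * original[ys, xs] + (1 - a) * generated[ys, xs]
    return out
```

With weights near 1 the output stays close to the real image; weights near 0 let the synthesized content through, so a learned weight grid can suppress unhelpful generated regions while keeping useful ones.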
Cite
Text
Tsutsui et al. "Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition." Neural Information Processing Systems, 2019.

Markdown
[Tsutsui et al. "Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/tsutsui2019neurips-metareinforced/)

BibTeX
@inproceedings{tsutsui2019neurips-metareinforced,
title = {{Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition}},
author = {Tsutsui, Satoshi and Fu, Yanwei and Crandall, David},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {3063--3072},
url = {https://mlanthology.org/neurips/2019/tsutsui2019neurips-metareinforced/}
}