Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs
Abstract
This work explores CNNs for the recognition of novel categories from few examples. Inspired by the transferability properties of CNNs, we introduce an additional unsupervised meta-training stage that exposes multiple top-layer units to a large amount of unlabeled real-world images. By encouraging these units to learn diverse sets of low-density separators across the unlabeled data, we capture a more generic, richer description of the visual world, which decouples these units from ties to a specific set of categories. We propose an unsupervised margin-maximization objective that jointly estimates compact high-density regions and infers low-density separators. The low-density separator (LDS) modules can be plugged into any or all of the top layers of a standard CNN architecture. The resulting CNNs significantly improve performance on scene classification, fine-grained recognition, and action recognition with small training samples.
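To make the idea concrete, below is a minimal sketch (not the authors' released code) of how an LDS-style module might sit on top of a frozen, pre-trained CNN and be meta-trained on unlabeled features. The specific loss is a generic margin-maximization surrogate chosen for illustration: a hinge term pushes each unit's response away from its decision boundary (so the boundary passes through low-density regions), and a decorrelation term encourages the units to learn diverse separators. All names, the loss form, and the hyperparameters are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDSModule(nn.Module):
    """Hypothetical bank of linear units acting as low-density separators."""
    def __init__(self, in_dim, num_units):
        super().__init__()
        self.separators = nn.Linear(in_dim, num_units)

    def forward(self, feats):
        # One score per unit; the sign side of each score is a half-space.
        return self.separators(feats)

def lds_meta_loss(scores, margin=1.0, diversity_weight=0.1):
    # Margin term: penalize unlabeled samples whose scores fall inside the
    # margin band around any separator (hinge on the absolute score).
    margin_loss = F.relu(margin - scores.abs()).mean()
    # Diversity term: penalize correlation between unit responses across
    # the batch so units do not collapse onto the same separator.
    z = F.normalize(scores - scores.mean(dim=0), dim=0)
    corr = z.t() @ z
    off_diag = corr - torch.diag(torch.diag(corr))
    diversity_loss = (off_diag ** 2).mean()
    return margin_loss + diversity_weight * diversity_loss

# Usage sketch: `backbone` is any frozen CNN feature extractor (e.g. a
# 4096-d fc layer) and `unlabeled_loader` yields batches of unlabeled
# images; both are assumed, not defined here.
#
# lds = LDSModule(in_dim=4096, num_units=512)
# opt = torch.optim.SGD(lds.parameters(), lr=1e-2, momentum=0.9)
# for images in unlabeled_loader:
#     with torch.no_grad():
#         feats = backbone(images)
#     loss = lds_meta_loss(lds(feats))
#     opt.zero_grad(); loss.backward(); opt.step()
```

After this meta-training stage, the separator units would replace or augment the corresponding top-layer units of the CNN, which is then fine-tuned on the small labeled sample set for the target task.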
Cite
Text
Wang and Hebert. "Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs." Neural Information Processing Systems, 2016.
Markdown
[Wang and Hebert. "Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/wang2016neurips-learning/)
BibTeX
@inproceedings{wang2016neurips-learning,
title = {{Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs}},
author = {Wang, Yu-Xiong and Hebert, Martial},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {244--252},
url = {https://mlanthology.org/neurips/2016/wang2016neurips-learning/}
}