An Embarrassingly Simple Baseline to One-Shot Learning
Abstract
In this paper, we propose an embarrassingly simple approach to one-shot learning. Our insight is that one-shot tasks have a domain gap with respect to the tasks the network was pretrained on, so some features from the pretrained network are irrelevant, or even harmful, to a given one-shot task. We therefore propose to directly prune the pretrained network's features for each one-shot task, rather than updating the network via an optimization scheme with a complex network structure. Without bells and whistles, our simple yet effective method achieves leading performance in the 5-way one-shot setting on miniImageNet (60.63%) and tieredImageNet (69.02%). The best trial reaches 66.83% on miniImageNet and 74.04% on tieredImageNet, establishing a new state of the art. We advocate our method as a strong baseline for one-shot learning. The code and trained models will be released at http://github.com/corwinliu9669/embarrassingly-simple-baseline.
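The abstract describes per-task feature pruning but does not spell out the pruning criterion. The following is a minimal NumPy sketch of the general idea under assumed details, not the paper's method: feature dimensions are scored by the spread of the support-class centroids (a hypothetical stand-in for the actual criterion), the top fraction is kept, and queries are classified by nearest centroid on the pruned dimensions. Random vectors stand in for embeddings from a pretrained backbone.

```python
import numpy as np

def prune_and_classify(support_feats, support_labels, query_feats, keep_ratio=0.5):
    """One-shot episode with per-task feature pruning (illustrative sketch).

    ASSUMPTION: dimensions are scored by inter-centroid variance; the
    paper's actual pruning criterion is not specified in the abstract.
    """
    classes = np.unique(support_labels)
    # One centroid per class (in one-shot, the single support embedding).
    centroids = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Score each dimension by how much the class centroids spread along it,
    # then keep only the top fraction for this specific task.
    scores = centroids.var(axis=0)
    k = max(1, int(keep_ratio * support_feats.shape[1]))
    keep = np.argsort(scores)[-k:]
    # Nearest-centroid classification on the pruned feature subset.
    diffs = query_feats[:, keep][:, None, :] - centroids[:, keep][None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 5-way 1-shot episode; random vectors play the role of a pretrained
# backbone's embeddings (dimension 640 chosen arbitrarily).
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 640))
support_labels = np.arange(5)
query_idx = np.repeat(np.arange(5), 3)          # 3 queries per class
queries = support[query_idx] + 0.1 * rng.normal(size=(15, 640))
preds = prune_and_classify(support, support_labels, queries)
print("episode accuracy:", (preds == query_idx).mean())
```

The key design point the abstract argues for is that pruning is done per episode from the support set alone, with no gradient updates to the pretrained network; the criterion above is merely one simple way to instantiate that.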
Cite
Text
Liu et al. "An Embarrassingly Simple Baseline to One-Shot Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00469Markdown
[Liu et al. "An Embarrassingly Simple Baseline to One-Shot Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/liu2020cvprw-embarrassingly/) doi:10.1109/CVPRW50498.2020.00469BibTeX
@inproceedings{liu2020cvprw-embarrassingly,
title = {{An Embarrassingly Simple Baseline to One-Shot Learning}},
author = {Liu, Chen and Xu, Chengming and Wang, Yikai and Zhang, Li and Fu, Yanwei},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
pages = {4005--4009},
doi = {10.1109/CVPRW50498.2020.00469},
url = {https://mlanthology.org/cvprw/2020/liu2020cvprw-embarrassingly/}
}