A Visual Inductive Priors Framework for Data-Efficient Image Classification

Abstract

State-of-the-art classifiers rely heavily on large-scale datasets such as ImageNet, JFT-300M, MS-COCO, and Open Images. Moreover, their performance can degrade significantly when they must learn from only a handful of samples. We present the Visual Inductive Priors Framework (VIPF), a framework that learns classifiers from scratch and maximizes the effectiveness of limited data. In this work, we propose a novel neural network architecture, DSK-net, which is highly effective when training on small datasets. The more discriminative features extracted by DSK-net alleviate network overfitting. Furthermore, a loss function based on the positive class and an induced class hierarchy are applied to further improve VIPF's capability of learning from scratch. Finally, we won 1st place in the VIPriors image classification competition.

Cite

Text

Sun et al. "A Visual Inductive Priors Framework for Data-Efficient Image Classification." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66096-3_35

Markdown

[Sun et al. "A Visual Inductive Priors Framework for Data-Efficient Image Classification." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/sun2020eccvw-visual/) doi:10.1007/978-3-030-66096-3_35

BibTeX

@inproceedings{sun2020eccvw-visual,
  title     = {{A Visual Inductive Priors Framework for Data-Efficient Image Classification}},
  author    = {Sun, Pengfei and Jin, Xuan and Su, Wei and He, Yuan and Xue, Hui and Lu, Quan},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2020},
  pages     = {511--520},
  doi       = {10.1007/978-3-030-66096-3_35},
  url       = {https://mlanthology.org/eccvw/2020/sun2020eccvw-visual/}
}