Self-Supervised Models Are Strong Industrial Few-Shot Defect Classification Learners

Abstract

In natural scenes, self-supervised models pre-trained on large-scale datasets have been shown to exhibit powerful few-shot classification capabilities. However, due to the significant domain gap between industrial and natural scenes, ImageNet pre-trained models often fail to meet accuracy requirements in few-shot settings. We constructed a large-scale industrial dataset and performed self-supervised pre-training on it. Specifically, we assembled a comprehensive industrial dataset, named LusterDataset, comprising one million images that span diverse industrial contexts, including 3C, lithium batteries, and photovoltaics. We adopted a contrastive self-supervised learning method to pre-train on this dataset, and proposed an advanced augmentation strategy to generate stable and accurate positive pairs. Benefiting from the powerful few-shot learning capabilities of self-supervised pre-training, our approach effectively addresses few-shot classification in industrial scenarios. A comprehensive series of experiments on both private and public datasets demonstrates that our method significantly improves few-shot defect classification, surpassing other pre-trained models by more than 2%. The code will be made available.
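The abstract describes contrastive self-supervised pre-training on augmented positive pairs. As an illustration only (the paper's specific augmentation strategy and loss details are not reproduced here), the sketch below shows the standard NT-Xent contrastive objective of the SimCLR family, where `z1[i]` and `z2[i]` are embeddings of two augmented views of the same image:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss (illustrative, not the paper's exact objective).

    z1, z2: (N, d) arrays; row i in each is an embedding of a different
    augmented view of the same image, forming a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for index i is its other view at index (i + n) mod 2N.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over all pairs
    loss = -(sim[np.arange(2 * n), pos] - log_denom)  # cross-entropy per anchor
    return loss.mean()
```

Minimizing this loss pulls the two augmented views of each image together while pushing apart views of different images, which is why the quality of the positive pairs (the focus of the paper's augmentation strategy) directly affects the learned representation.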

Cite

Text

Yang et al. "Self-Supervised Models Are Strong Industrial Few-Shot Defect Classification Learners." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92805-5_20

Markdown

[Yang et al. "Self-Supervised Models Are Strong Industrial Few-Shot Defect Classification Learners." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/yang2024eccvw-selfsupervised/) doi:10.1007/978-3-031-92805-5_20

BibTeX

@inproceedings{yang2024eccvw-selfsupervised,
  title     = {{Self-Supervised Models Are Strong Industrial Few-Shot Defect Classification Learners}},
  author    = {Yang, Teng and Gao, Pengcheng and Wang, Jinbao and Tang, Yongliang},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {310--327},
  doi       = {10.1007/978-3-031-92805-5_20},
  url       = {https://mlanthology.org/eccvw/2024/yang2024eccvw-selfsupervised/}
}