SCMT: Self-Correction Mean Teacher for Semi-Supervised Object Detection
Abstract
Semi-Supervised Object Detection (SSOD) aims to improve performance by leveraging a large amount of unlabeled data. Existing works usually adopt the teacher-student framework, enforcing the student to learn consistent predictions over the pseudo-labels generated by the teacher. However, the performance of the student model is limited since noise inherently exists in the pseudo-labels. In this paper, we investigate the causes and effects of noisy pseudo-labels and propose a simple yet effective approach, denoted Self-Correction Mean Teacher (SCMT), to reduce their adverse effects. Specifically, we propose to dynamically re-weight the unsupervised loss of each student proposal using additional supervision information from the teacher model, assigning smaller loss weights to likely noisy proposals. Extensive experiments on the MS-COCO benchmark demonstrate the superiority of the proposed SCMT, which significantly improves the supervised baseline by more than 11% mAP under all of the 1%, 5%, and 10% COCO-standard settings, and surpasses state-of-the-art methods by about 1.5% mAP. Even under the challenging COCO-additional setting, SCMT still improves the supervised baseline by 4.9% mAP and outperforms previous methods by 1.2% mAP, achieving a new state-of-the-art performance.
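The re-weighting idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical reading, not the paper's implementation: it assumes the teacher provides a confidence score in [0, 1] for each student proposal, and that score directly scales the proposal's unsupervised loss so that likely-noisy proposals contribute less.

```python
def reweight_unsup_losses(per_proposal_losses, teacher_scores):
    """Hypothetical SCMT-style re-weighting (details assumed, not from the paper).

    per_proposal_losses: one unsupervised loss value per student proposal.
    teacher_scores: the teacher model's confidence in [0, 1] for the same
        proposals (assumed interface).
    Returns a single weighted unsupervised loss.
    """
    assert len(per_proposal_losses) == len(teacher_scores)
    # Down-weight proposals the teacher is unsure about.
    weighted = [l * w for l, w in zip(per_proposal_losses, teacher_scores)]
    # Normalize by the total weight so the loss scale stays comparable
    # across batches with different confidence distributions.
    total_w = sum(teacher_scores) or 1.0
    return sum(weighted) / total_w

# A low-confidence proposal (0.1) barely affects the combined loss:
loss = reweight_unsup_losses([1.0, 2.0, 0.5], [0.9, 0.1, 0.8])
```

The normalization choice here is one plausible option; the paper may combine the weighted terms differently.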
Cite
Text
Xiong et al. "SCMT: Self-Correction Mean Teacher for Semi-Supervised Object Detection." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/207
Markdown
[Xiong et al. "SCMT: Self-Correction Mean Teacher for Semi-Supervised Object Detection." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/xiong2022ijcai-scmt/) doi:10.24963/IJCAI.2022/207
BibTeX
@inproceedings{xiong2022ijcai-scmt,
title = {{SCMT: Self-Correction Mean Teacher for Semi-Supervised Object Detection}},
author = {Xiong, Feng and Tian, Jiayi and Hao, Zhihui and He, Yulin and Ren, Xiaofeng},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {1488--1494},
doi = {10.24963/IJCAI.2022/207},
url = {https://mlanthology.org/ijcai/2022/xiong2022ijcai-scmt/}
}