Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification

Abstract

In this paper, we study a previously unexplored problem in visible-infrared person re-identification (VI-ReID), namely, Twin Noise Labels (TNL), which refers to noisy annotation and noisy correspondence. In brief, on the one hand, it is inevitable that some persons are annotated with the wrong identity due to the complexity of data collection and annotation, e.g., the poor recognizability of the infrared modality. On the other hand, wrongly annotated data in a single modality will eventually contaminate the cross-modal correspondence, thus leading to noisy correspondence. To solve the TNL problem, we propose a novel method for robust VI-ReID, termed DuAlly Robust Training (DART). In brief, DART first computes the clean confidence of annotations by resorting to the memorization effect of deep neural networks. It then rectifies the noisy correspondence with the estimated confidence and further divides the data into four groups for subsequent use. Finally, DART employs a novel dually robust loss, consisting of a soft identification loss and an adaptive quadruplet loss, to achieve robustness against noisy annotation and noisy correspondence. Extensive experiments on the SYSU-MM01 and RegDB datasets verify the effectiveness of our method against twin noisy labels in comparison with five state-of-the-art methods. The code is available at https://github.com/XLearning-SCU/2022-CVPR-DART.
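
The abstract describes estimating a per-sample "clean confidence" from the memorization effect and using it to soften the identification loss. Below is a minimal, hedged sketch of that idea, not the authors' released implementation: the two-component Gaussian mixture over per-sample losses and the helper names `estimate_clean_confidence` and `soft_id_loss` are illustrative assumptions; see the linked repository for the actual DART code.

```python
# Illustrative sketch (assumptions noted above), not the official DART code.
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def estimate_clean_confidence(per_sample_losses: torch.Tensor) -> torch.Tensor:
    """Fit a two-component GMM to per-sample losses; by the memorization effect,
    samples falling in the low-loss component are more likely to be clean."""
    losses = per_sample_losses.detach().cpu().numpy().reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = gmm.means_.argmin()  # component with the smaller mean loss
    confidence = gmm.predict_proba(losses)[:, clean_component]
    return torch.from_numpy(confidence).float()


def soft_id_loss(logits: torch.Tensor, labels: torch.Tensor,
                 confidence: torch.Tensor) -> torch.Tensor:
    """Confidence-weighted identification loss: samples that look noisy
    (low confidence) contribute less to the gradient."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (confidence.to(per_sample.device) * per_sample).mean()
```

In such a scheme, the estimated confidences could likewise drive the correspondence rectification and the four-way data split mentioned in the abstract, with the adaptive quadruplet loss applied on top of the partitioned data.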

Cite

Text

Yang et al. "Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01391

Markdown

[Yang et al. "Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/yang2022cvpr-learning/) doi:10.1109/CVPR52688.2022.01391

BibTeX

@inproceedings{yang2022cvpr-learning,
  title     = {{Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification}},
  author    = {Yang, Mouxing and Huang, Zhenyu and Hu, Peng and Li, Taihao and Lv, Jiancheng and Peng, Xi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {14308-14317},
  doi       = {10.1109/CVPR52688.2022.01391},
  url       = {https://mlanthology.org/cvpr/2022/yang2022cvpr-learning/}
}