Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification
Abstract
Due to the modality gap between visible and infrared images, which carry high visual ambiguity, learning diverse modality-shared semantic concepts for visible-infrared person re-identification (VI-ReID) remains a challenging problem. Body shape is one of the most significant modality-shared cues for VI-ReID. To mine more diverse modality-shared cues, we expect that erasing body-shape-related semantic concepts from the learned features will force the ReID model to extract additional modality-shared features for identification. To this end, we propose a shape-erased feature learning paradigm that decorrelates modality-shared features into two orthogonal subspaces. Jointly learning shape-related features in one subspace and shape-erased features in the orthogonal complement achieves conditional mutual information maximization between the shape-erased feature and identity while discarding body shape information, thus explicitly enhancing the diversity of the learned representation. Extensive experiments on the SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate the effectiveness of our method.
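The orthogonal-subspace decomposition described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name orthogonal_decompose is hypothetical, and the random orthonormal basis stands in for a subspace that, in the paper, would be learned jointly with the model. Given a basis P for the shape-related subspace, a feature is split into its projection P P^T f (shape-related) and the residual (I - P P^T) f (shape-erased), so the two components are orthogonal, and hence decorrelated, by construction.

import torch

def orthogonal_decompose(feat, basis):
    """Split features into a subspace component and its orthogonal complement.

    feat:  (B, D) batch of feature vectors.
    basis: (D, K) matrix whose columns form an orthonormal basis of the
           shape-related subspace (K < D).
    Returns (shape_related, shape_erased): shape_related lies in span(basis),
    shape_erased lies in its orthogonal complement.
    """
    # Projection onto the subspace: P P^T f
    coords = feat @ basis             # (B, K) coordinates in the subspace
    shape_related = coords @ basis.T  # (B, D) projection back into R^D
    # Orthogonal complement: (I - P P^T) f
    shape_erased = feat - shape_related
    return shape_related, shape_erased

if __name__ == "__main__":
    torch.manual_seed(0)
    D, K, B = 512, 128, 4
    # Orthonormal basis via QR of a random matrix (a stand-in for a learned basis).
    basis, _ = torch.linalg.qr(torch.randn(D, K))
    feat = torch.randn(B, D)
    f_shape, f_erased = orthogonal_decompose(feat, basis)
    # Per-sample inner products are ~0: the two components are orthogonal.
    print((f_shape * f_erased).sum(dim=1))

Because (P P^T)(I - P P^T) = 0 for an orthonormal P, the per-sample inner products printed above vanish up to floating-point error, which is the decorrelation property the paradigm relies on.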
Cite
Text
Feng et al. "Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02179
Markdown
[Feng et al. "Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/feng2023cvpr-shapeerased/) doi:10.1109/CVPR52729.2023.02179
BibTeX
@inproceedings{feng2023cvpr-shapeerased,
title = {{Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification}},
author = {Feng, Jiawei and Wu, Ancong and Zheng, Wei-Shi},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {22752--22761},
doi = {10.1109/CVPR52729.2023.02179},
url = {https://mlanthology.org/cvpr/2023/feng2023cvpr-shapeerased/}
}