Unified Pre-Training with Pseudo Texts for Text-to-Image Person Re-Identification
Abstract
The pre-training task is indispensable for the text-to-image person re-identification (T2I-ReID) task. However, there are two underlying inconsistencies between these two tasks that can degrade performance: i) Data inconsistency. A large domain gap exists between the generic images/texts used in public pre-trained models and the specific person data in the T2I-ReID task. This gap is especially severe for texts, as general textual data are usually unable to describe specific people in fine-grained detail. ii) Training inconsistency. Images and texts are pre-trained independently, even though cross-modality learning is critical to T2I-ReID. To address the above issues, we present a new unified pre-training pipeline (UniPT) designed specifically for the T2I-ReID task. We first build a large-scale text-labeled person dataset "LUPerson-T", in which pseudo-textual descriptions of images are automatically generated by the CLIP paradigm using a divide-conquer-combine strategy. Benefiting from this dataset, we then utilize a simple vision-and-language pre-training framework to explicitly align the feature space of the image and text modalities during pre-training. In this way, the pre-training task and the T2I-ReID task are made consistent with each other on both the data and training levels. Without bells and whistles, our UniPT achieves competitive Rank-1 accuracies of 68.50%, 60.09%, and 51.85% on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. Both the LUPerson-T dataset and code are available at https://github.com/ZhiyinShao-H/UniPT.
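To make the divide-conquer-combine idea concrete, below is a minimal sketch of CLIP-based pseudo-text generation, assuming the OpenAI `clip` package. The attribute groups, candidate phrases, and caption template are illustrative placeholders, not the vocabulary actually used to build LUPerson-T; see the released dataset for the real phrase sets.

```python
# A minimal sketch of divide-conquer-combine pseudo-text generation,
# assuming the OpenAI CLIP package (pip install torch clip).
# ATTRIBUTE_GROUPS and the caption template are hypothetical placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Divide: split the description space into independent attribute groups.
ATTRIBUTE_GROUPS = {
    "upper body": ["a red shirt", "a white t-shirt", "a black jacket"],
    "lower body": ["blue jeans", "black trousers", "a grey skirt"],
    "accessory":  ["a backpack", "a handbag", "no bag"],
}

@torch.no_grad()
def generate_pseudo_text(image_path: str) -> str:
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

    chosen = []
    # Conquer: within each group, pick the phrase CLIP scores highest
    # for this image.
    for group, phrases in ATTRIBUTE_GROUPS.items():
        tokens = clip.tokenize(phrases).to(device)
        txt_feat = model.encode_text(tokens)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        best = (img_feat @ txt_feat.T).argmax(dim=-1).item()
        chosen.append(phrases[best])

    # Combine: assemble the selected phrases into one caption.
    return "A person wearing " + ", ".join(chosen) + "."
```

The explicit image-text alignment during pre-training can likewise be sketched as a symmetric contrastive loss over matched image/pseudo-text pairs. This is a generic CLIP-style formulation, not necessarily UniPT's exact objective:

```python
# A minimal sketch of aligning image and text feature spaces with a
# symmetric contrastive (CLIP-style) loss; a generic formulation, not
# necessarily the paper's exact pre-training objective.
import torch
import torch.nn.functional as F

def alignment_loss(img_emb: torch.Tensor,
                   txt_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) embeddings of matched pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature  # pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each image should retrieve its paired pseudo text and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```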
Cite
Text

Shao et al. "Unified Pre-Training with Pseudo Texts for Text-to-Image Person Re-Identification." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01026

Markdown

[Shao et al. "Unified Pre-Training with Pseudo Texts for Text-to-Image Person Re-Identification." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/shao2023iccv-unified/) doi:10.1109/ICCV51070.2023.01026

BibTeX
@inproceedings{shao2023iccv-unified,
title = {{Unified Pre-Training with Pseudo Texts for Text-to-Image Person Re-Identification}},
author = {Shao, Zhiyin and Zhang, Xinyu and Ding, Changxing and Wang, Jian and Wang, Jingdong},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {11174--11184},
doi = {10.1109/ICCV51070.2023.01026},
url = {https://mlanthology.org/iccv/2023/shao2023iccv-unified/}
}