Adversarial Attribute-Image Person Re-Identification
Abstract
While attributes have been widely used in person re-identification (Re-ID), which aims at matching images of the same person across disjoint camera views, they serve only as extra features or in multi-task learning to assist the image-image matching task. How to retrieve a set of person images from a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and leverage adversarial learning to formulate an attribute-image cross-modality person Re-ID model. By imposing a semantic consistency constraint across modalities as a regularization, the adversarial learning generates image-analogous concepts of the query attributes and matches them to the corresponding images at both the global level and the semantic ID level. Extensive experiments on three attribute datasets demonstrate that the regularized adversarial modelling is so far the most effective method for the attribute-image cross-modality person Re-ID problem.
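The core idea in the abstract can be illustrated with a toy sketch: an attribute encoder generates an "image-analogous" feature, a discriminator pushes that feature to look like a real image feature (adversarial term), and a shared ID classifier enforces semantic consistency across modalities. This is a minimal numpy sketch under assumed toy dimensions and simple logistic/softmax models; it is not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's real networks are deep models)
attr_dim, feat_dim, n_ids = 8, 16, 4

# Generator: maps an attribute vector to an image-analogous feature
W_g = rng.normal(scale=0.1, size=(attr_dim, feat_dim))
# Discriminator: scores whether a feature looks like a real image feature
w_d = rng.normal(scale=0.1, size=feat_dim)
# ID classifier shared by both modalities (semantic consistency regularizer)
W_c = rng.normal(scale=0.1, size=(feat_dim, n_ids))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One toy sample: binary attribute vector, a matching image feature, its ID
a = rng.integers(0, 2, size=attr_dim).astype(float)
x_img = rng.normal(size=feat_dim)
pid = 2

lr = 0.05
for step in range(200):
    x_gen = a @ W_g                            # image-analogous concept

    # Discriminator step: real image feature -> 1, generated -> 0
    d_real, d_fake = sigmoid(w_d @ x_img), sigmoid(w_d @ x_gen)
    w_d += lr * ((1 - d_real) * x_img - d_fake * x_gen)

    # Generator step, adversarial term: fool the discriminator
    d_fake = sigmoid(w_d @ x_gen)
    grad_adv = (1 - d_fake) * w_d              # d log D(x_gen) / d x_gen

    # Semantic consistency term: generated feature must carry the right ID
    p = softmax(x_gen @ W_c)
    onehot = np.eye(n_ids)[pid]
    grad_sem = W_c @ (onehot - p)              # d log p(pid) / d x_gen

    W_g += lr * np.outer(a, grad_adv + grad_sem)
    W_c += lr * np.outer(x_gen, onehot - p)

# The generated feature should now classify to the correct identity
pred = int(np.argmax(softmax((a @ W_g) @ W_c)))
```

The semantic term is what makes the matching work: the adversarial loss alone only makes generated features plausible, while the shared ID classifier ties an attribute query and its target images to the same identity in the common feature space.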
Cite
Text
Yin et al. "Adversarial Attribute-Image Person Re-Identification." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/153
Markdown
[Yin et al. "Adversarial Attribute-Image Person Re-Identification." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/yin2018ijcai-adversarial/) doi:10.24963/IJCAI.2018/153
BibTeX
@inproceedings{yin2018ijcai-adversarial,
title = {{Adversarial Attribute-Image Person Re-Identification}},
author = {Yin, Zhou and Zheng, Wei-Shi and Wu, Ancong and Yu, Hong-Xing and Wan, Hai and Guo, Xiaowei and Huang, Feiyue and Lai, Jianhuang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {1100-1106},
doi = {10.24963/IJCAI.2018/153},
url = {https://mlanthology.org/ijcai/2018/yin2018ijcai-adversarial/}
}