Variation Generalized Feature Learning via Intra-View Variation Adaptation
Abstract
This paper addresses the variation generalized feature learning problem in unsupervised video-based person re-identification (re-ID). With advanced tracking and detection algorithms, large-scale intra-view positive samples can be easily collected by assuming that the image frames within a tracking sequence belong to the same person. Existing methods either directly use the intra-view positives to model cross-view variations or simply minimize the intra-view variations to capture the invariant component, at the cost of some discriminative information. In this paper, we propose a Variation Generalized Feature Learning (VGFL) method to learn adaptable feature representations from intra-view positives. The proposed method learns a discriminative re-ID model without any manually annotated cross-view positive sample pairs, and addresses unseen testing variations with a novel variation generalized feature learning algorithm. In addition, an Adaptability-Discriminability (AD) fusion method is introduced to learn adaptable video-level features. Extensive experiments on different datasets demonstrate the effectiveness of the proposed method.
Cite
Text
Li et al. "Variation Generalized Feature Learning via Intra-View Variation Adaptation." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/116
Markdown
[Li et al. "Variation Generalized Feature Learning via Intra-View Variation Adaptation." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/li2019ijcai-variation/) doi:10.24963/IJCAI.2019/116
BibTeX
@inproceedings{li2019ijcai-variation,
title = {{Variation Generalized Feature Learning via Intra-View Variation Adaptation}},
author = {Li, Jiawei and Ye, Mang and Ma, Andy Jinhua and Yuen, Pong C.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {826--832},
doi = {10.24963/IJCAI.2019/116},
url = {https://mlanthology.org/ijcai/2019/li2019ijcai-variation/}
}