Unsupervised Salience Learning for Person Re-Identification

Abstract

Human eyes can recognize person identities based on small salient regions. However, such valuable salient information is often hidden when computing similarities of images with existing approaches. Moreover, many existing approaches learn discriminative features and handle drastic viewpoint changes in a supervised way, requiring new labeled training data for each different pair of camera views. In this paper, we propose a novel perspective for person re-identification based on unsupervised salience learning. Distinctive features are extracted without requiring identity labels in the training procedure. First, we apply adjacency constrained patch matching to build dense correspondence between image pairs, which is effective in handling misalignment caused by large viewpoint and pose variations. Second, we learn human salience in an unsupervised manner. To improve the performance of person re-identification, human salience is incorporated into patch matching to find reliable and discriminative matched patches. The effectiveness of our approach is validated on the widely used VIPeR and ETHZ datasets.
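For readers who want a concrete picture of the two steps outlined in the abstract, a minimal NumPy sketch is given below. It is an illustration under assumptions, not the authors' released implementation: it assumes patch descriptors have already been extracted on a dense grid, and the function and parameter names (`adjacency_constrained_matching`, `knn_salience`, `band`, `k_frac`) are hypothetical. The first function restricts the search for each probe patch to a horizontal stripe of the gallery image (the adjacency constraint); the second scores a patch's salience by how poorly it matches across a set of unlabeled reference images, using the distance to its k-th nearest best match.

```python
import numpy as np


def adjacency_constrained_matching(feats_a, feats_b, band=2):
    """For every patch in image A, find its best match in image B,
    searching only the same horizontal stripe (+/- `band` rows).

    feats_a, feats_b: arrays of shape (rows, cols, dim) holding one
    descriptor per patch on a dense grid.
    Returns an array (rows, cols) of minimal matching distances.
    """
    rows, cols, dim = feats_a.shape
    dists = np.empty((rows, cols))
    for m in range(rows):
        lo, hi = max(0, m - band), min(rows, m + band + 1)
        candidates = feats_b[lo:hi].reshape(-1, dim)
        for n in range(cols):
            d = np.linalg.norm(candidates - feats_a[m, n], axis=1)
            dists[m, n] = d.min()
    return dists


def knn_salience(feats_probe, reference_feats, k_frac=0.5, band=2):
    """Unsupervised salience score per patch: the distance to the
    k-th smallest of the best matches found in a set of unlabeled
    reference images. Patches that match poorly everywhere (i.e.
    are distinctive) receive high scores.
    """
    all_dists = np.stack([
        adjacency_constrained_matching(feats_probe, ref, band)
        for ref in reference_feats
    ])                                    # (num_refs, rows, cols)
    k = max(1, int(k_frac * len(reference_feats)))
    return np.sort(all_dists, axis=0)[k - 1]
```

In this sketch, a high salience score can then be used to weight matched patch distances when ranking gallery images, so that distinctive regions dominate the similarity computation.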

Cite

Text

Zhao et al. "Unsupervised Salience Learning for Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.460

Markdown

[Zhao et al. "Unsupervised Salience Learning for Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/zhao2013cvpr-unsupervised/) doi:10.1109/CVPR.2013.460

BibTeX

@inproceedings{zhao2013cvpr-unsupervised,
  title     = {{Unsupervised Salience Learning for Person Re-Identification}},
  author    = {Zhao, Rui and Ouyang, Wanli and Wang, Xiaogang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2013},
  doi       = {10.1109/CVPR.2013.460},
  url       = {https://mlanthology.org/cvpr/2013/zhao2013cvpr-unsupervised/}
}