Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification
Abstract
Person re-identification (ReID) is an important task in wide-area video surveillance that focuses on identifying people across different cameras. Recently, deep networks trained with a triplet loss have become a common framework for person ReID. However, the triplet loss concentrates mainly on obtaining correct orders on the training set; it still suffers from weak generalization from the training set to the testing set, resulting in inferior performance. In this paper, we design a quadruplet loss, which leads to model outputs with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model generalizes better and achieves higher performance on the testing set. In particular, we propose a quadruplet deep network with margin-based online hard negative mining built on the quadruplet loss for person ReID. In extensive experiments, the proposed network outperforms most state-of-the-art algorithms on representative datasets, which clearly demonstrates the effectiveness of our method.
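To make the quadruplet idea concrete, below is a minimal PyTorch-style sketch of a quadruplet loss as described in the abstract. It is an illustration under stated assumptions, not the authors' reference implementation: the function name quadruplet_loss, the margin values, and the use of squared Euclidean distances are illustrative, and the margin-based online hard negative mining step is omitted.

import torch
import torch.nn.functional as F


def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Quadruplet loss sketch (margins are hypothetical defaults).

    anchor, positive     -- embeddings of the same identity
    negative1, negative2 -- embeddings of two further, mutually distinct identities
    """
    d_ap = F.pairwise_distance(anchor, positive) ** 2      # intra-class distance
    d_an = F.pairwise_distance(anchor, negative1) ** 2     # anchor vs. a negative
    d_nn = F.pairwise_distance(negative1, negative2) ** 2  # distance between two negatives

    # Triplet-style term: the negative should be farther from the anchor
    # than the positive is, by at least margin1.
    triplet_term = F.relu(d_ap - d_an + margin1)
    # Extra quadruplet term: the intra-class distance should also be smaller
    # than inter-class distances measured between pairs that exclude the anchor.
    quadruplet_term = F.relu(d_ap - d_nn + margin2)

    return (triplet_term + quadruplet_term).mean()


# Example usage with random embeddings (batch of 8, 128-dim features).
a, p, n1, n2 = (torch.randn(8, 128) for _ in range(4))
loss = quadruplet_loss(a, p, n1, n2)

The second constraint, which compares the positive pair against a pair of two different negatives, is intended as a weaker auxiliary push toward larger inter-class and smaller intra-class variation, so its margin is typically chosen smaller than the first (margin2 < margin1 in the sketch above).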
Cite
Text
Chen et al. "Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.145
Markdown
[Chen et al. "Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/chen2017cvpr-beyond/) doi:10.1109/CVPR.2017.145
BibTeX
@inproceedings{chen2017cvpr-beyond,
title = {{Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification}},
author = {Chen, Weihua and Chen, Xiaotang and Zhang, Jianguo and Huang, Kaiqi},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.145},
url = {https://mlanthology.org/cvpr/2017/chen2017cvpr-beyond/}
}