Resolution-Invariant Person Re-Identification
Abstract
Learning resolution-invariant representations is critical for person Re-Identification (ReID) in real applications, where the resolutions of captured person images may vary dramatically. This paper learns person representations robust to resolution variance by jointly training a Foreground-Focus Super-Resolution (FFSR) module and a Resolution-Invariant Feature Extractor (RIFE) in an end-to-end CNN. FFSR upscales the person foreground using a fully convolutional auto-encoder with skip connections, trained with a foreground-focus loss. RIFE adopts two feature-extraction streams, weighted by a dual-attention block, to learn features for low- and high-resolution images, respectively. These two complementary modules are jointly trained, leading to a strong resolution-invariant representation. We evaluate our methods on five datasets containing person images at a wide range of resolutions, where our methods show substantial superiority over existing solutions. For instance, we achieve Rank-1 accuracies of 36.4% and 73.3% on CAVIAR and MLR-CUHK03, outperforming the state of the art by 2.9% and 2.6%, respectively.
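The abstract's dual-attention weighting of two feature streams can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the feature shapes, the single linear gate, and the weighted-sum fusion rule are all our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention_fuse(feat_hr, feat_lr, w_gate):
    """Fuse high- and low-resolution feature streams with a learned
    attention gate (hypothetical sketch of RIFE-style fusion).

    feat_hr, feat_lr: (batch, dim) features from the two streams.
    w_gate: (2 * dim, 2) gate parameters producing one weight per stream.
    """
    gate_in = np.concatenate([feat_hr, feat_lr], axis=1)   # (batch, 2 * dim)
    weights = softmax(gate_in @ w_gate)                    # (batch, 2), rows sum to 1
    fused = weights[:, :1] * feat_hr + weights[:, 1:] * feat_lr
    return fused, weights

# Toy usage with random features standing in for CNN stream outputs.
rng = np.random.default_rng(0)
f_hr = rng.normal(size=(4, 8))
f_lr = rng.normal(size=(4, 8))
fused, w = dual_attention_fuse(f_hr, f_lr, rng.normal(size=(16, 2)))
```

In the actual model the gate would be trained end-to-end together with FFSR, so that low-resolution inputs shift weight toward the stream specialized for them.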
Cite
Text
Mao et al. "Resolution-Invariant Person Re-Identification." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/124
Markdown
[Mao et al. "Resolution-Invariant Person Re-Identification." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/mao2019ijcai-resolution/) doi:10.24963/IJCAI.2019/124
BibTeX
@inproceedings{mao2019ijcai-resolution,
title = {{Resolution-Invariant Person Re-Identification}},
author = {Mao, Shunan and Zhang, Shiliang and Yang, Ming},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {883-889},
doi = {10.24963/IJCAI.2019/124},
url = {https://mlanthology.org/ijcai/2019/mao2019ijcai-resolution/}
}