Person Re-Identification by Deep Learning Multi-Scale Representations
Abstract
Existing person re-identification (re-id) methods depend mostly on single-scale appearance information. This not only ignores the potentially useful explicit information at other scales, but also loses the chance of mining the implicit correlated complementary advantages across scales. In this work, we demonstrate the benefits of learning multi-scale person appearance features using Convolutional Neural Networks (CNN), aiming to jointly learn discriminative scale-specific features and maximise multi-scale feature fusion selections over image pyramid inputs. Specifically, we formulate a novel Deep Pyramid Feature Learning (DPFL) CNN architecture for multi-scale appearance feature fusion, optimised simultaneously by concurrent per-scale re-id losses and interactive cross-scale consensus regularisation in a closed-loop design. Extensive comparative evaluations demonstrate the re-id advantages of the proposed DPFL model over a wide range of state-of-the-art re-id methods on three benchmarks: Market-1501, CUHK03, and DukeMTMC-reID.
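To make the cross-scale consensus idea concrete, the following is a minimal, hypothetical sketch (not the authors' code or exact loss): each pyramid scale produces class probabilities, a consensus is formed by averaging them, and each scale is pulled toward that consensus via a KL-divergence term that would be added to the per-scale re-id losses. All function names here are illustrative.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw identity scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def consensus_regulariser(per_scale_logits):
    """Mean KL(consensus || p_m) over scales m.

    Illustrates a soft cross-scale consensus term: the consensus is the
    average of the per-scale probability distributions, and each scale is
    penalised for deviating from it. A sketch only, not the paper's
    exact formulation.
    """
    probs = [softmax(l) for l in per_scale_logits]
    n = len(probs)
    k = len(probs[0])
    # Consensus: elementwise mean of the per-scale distributions.
    consensus = [sum(p[i] for p in probs) / n for i in range(k)]
    eps = 1e-12  # guard against log(0)

    def kl(q, p):
        return sum(qi * math.log((qi + eps) / (pi + eps))
                   for qi, pi in zip(q, p))

    return sum(kl(consensus, p) for p in probs) / n
```

When the scale-specific branches agree, the regulariser vanishes; disagreement yields a positive penalty, encouraging the branches to exchange information during joint training.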
Cite
Text
Chen et al. "Person Re-Identification by Deep Learning Multi-Scale Representations." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.304
Markdown
[Chen et al. "Person Re-Identification by Deep Learning Multi-Scale Representations." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/chen2017iccvw-person/) doi:10.1109/ICCVW.2017.304
BibTeX
@inproceedings{chen2017iccvw-person,
title = {{Person Re-Identification by Deep Learning Multi-Scale Representations}},
author = {Chen, Yanbei and Zhu, Xiatian and Gong, Shaogang},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {2590--2600},
doi = {10.1109/ICCVW.2017.304},
url = {https://mlanthology.org/iccvw/2017/chen2017iccvw-person/}
}