Person Re-Identification via Recurrent Feature Aggregation
Abstract
We address the person re-identification problem by effectively exploiting a globally discriminative feature representation from a sequence of tracked human regions/patches. This is in contrast to previous person re-id works, which rely on either single-frame person-to-person patch matching or graph-based sequence-to-sequence matching. We show that a progressive/sequential fusion framework based on a long short-term memory (LSTM) network aggregates the frame-wise human region representation at each time step and yields a sequence-level human feature representation. Since LSTM nodes can remember and propagate previously accumulated good features and forget newly input inferior ones, even with simple hand-crafted features, the proposed recurrent feature aggregation network (RFA-Net) is effective in generating highly discriminative sequence-level human representations. Extensive experimental results on two person re-identification benchmarks demonstrate that the proposed method performs favorably against state-of-the-art person re-identification methods.
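The sequence-level aggregation described above can be sketched with a minimal NumPy LSTM. This is an illustrative toy, not the authors' RFA-Net: the feature dimensions, random weight initialization, and the mean-pooling readout over hidden states are assumptions made only to show how per-frame features are fused into one descriptor.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a single frame's feature vector x."""
    H = h.shape[0]
    z = W @ x + U @ h + b                  # stacked gate pre-activations, shape (4H,)
    i = 1.0 / (1.0 + np.exp(-z[0:H]))      # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))    # forget gate: keeps accumulated good features
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:4*H])                # candidate cell update
    c = f * c + i * g                      # cell state carries sequence memory
    h = o * np.tanh(c)                     # hidden state at this time step
    return h, c

def aggregate_sequence(frames, W, U, b, hidden_dim):
    """Fuse a list of per-frame feature vectors into one sequence-level descriptor."""
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    hidden_states = []
    for x in frames:                       # progressive/sequential fusion
        h, c = lstm_step(x, h, c, W, U, b)
        hidden_states.append(h)
    # Assumed readout: average the hidden states into one representation.
    return np.mean(hidden_states, axis=0)

rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                          # toy feature dim, hidden dim, sequence length
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
frames = [rng.normal(size=D) for _ in range(T)]

rep = aggregate_sequence(frames, W, U, b, H)
print(rep.shape)  # (4,) — one fixed-size vector for the whole tracked sequence
```

The key property the paper exploits is visible here: the forget and input gates decide, per time step, how much of the accumulated representation to keep versus overwrite, so a single fixed-size vector summarizes the whole track regardless of its length.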
Cite
Text
Yan et al. "Person Re-Identification via Recurrent Feature Aggregation." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46466-4_42
Markdown
[Yan et al. "Person Re-Identification via Recurrent Feature Aggregation." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/yan2016eccv-person/) doi:10.1007/978-3-319-46466-4_42
BibTeX
@inproceedings{yan2016eccv-person,
title = {{Person Re-Identification via Recurrent Feature Aggregation}},
author = {Yan, Yichao and Ni, Bingbing and Song, Zhichao and Ma, Chao and Yan, Yan and Yang, Xiaokang},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {701-716},
doi = {10.1007/978-3-319-46466-4_42},
url = {https://mlanthology.org/eccv/2016/yan2016eccv-person/}
}