Dynamic Aggregated Network for Gait Recognition

Abstract

Gait recognition is beneficial for a variety of applications, including video surveillance, crime scene investigation, and social security, to name a few. However, gait recognition often suffers from multiple exterior factors in real scenes, such as carrying conditions, wearing overcoats, and diverse viewing angles. Recently, various deep learning-based gait recognition methods have achieved promising results, but they tend to extract only one type of salient feature using fixed-weight convolutional networks, do not adequately model the relationships among gait features in key regions, and ignore the aggregation of complete motion patterns. In this paper, we propose a new perspective: actual gait features comprise global motion patterns in multiple key regions, and each global motion pattern is composed of a series of local motion patterns. To this end, we propose a Dynamic Aggregation Network (DANet) to learn more discriminative gait features. Specifically, we create a dynamic attention mechanism between the features of neighboring pixels that not only adaptively focuses on key regions but also generates more expressive local motion patterns. In addition, we develop a self-attention mechanism to select representative local motion patterns and further learn robust global motion patterns. Extensive experiments on three popular public gait datasets, i.e., CASIA-B, OUMVLP, and Gait3D, demonstrate that the proposed method provides substantial improvements over current state-of-the-art methods.
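
The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the two ideas the abstract describes: a dynamic attention over neighboring-pixel features that forms local motion patterns, and a self-attention step that aggregates local patterns into a global descriptor. It assumes PyTorch; all module names, shapes, and hyper-parameters (DynamicLocalAttention, GlobalAggregation, 3x3 neighborhoods, 4 heads) are illustrative assumptions rather than details taken from the paper.

# Minimal sketch of dynamic local attention + self-attention aggregation.
# Hypothetical toy code; not the DANet reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicLocalAttention(nn.Module):
    """Re-weights each pixel's 3x3 neighborhood with weights predicted
    from the input features themselves (an input-conditioned kernel)."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        # Predict one attention weight per neighbor position, per pixel.
        self.weight_pred = nn.Conv2d(channels, k * k, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        attn = self.weight_pred(x).softmax(dim=1)              # (B, k*k, H, W)
        # Unfold gathers each pixel's k*k neighborhood: (B, C*k*k, H*W)
        patches = F.unfold(x, self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h, w)    # (B, C, k*k, H, W)
        # Weighted sum over the neighborhood -> a local motion pattern per pixel.
        return (patches * attn.unsqueeze(1)).sum(dim=2)        # (B, C, H, W)


class GlobalAggregation(nn.Module):
    """Self-attention over local patterns, then pooling into a single
    global motion descriptor."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        selected, _ = self.attn(tokens, tokens, tokens)        # emphasize representative patterns
        return selected.mean(dim=1)                            # (B, C) global descriptor


if __name__ == "__main__":
    feats = torch.randn(2, 64, 16, 11)          # toy feature maps from silhouettes
    local = DynamicLocalAttention(64)(feats)
    global_desc = GlobalAggregation(64)(local)
    print(local.shape, global_desc.shape)       # (2, 64, 16, 11) and (2, 64)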

Cite

Text

Ma et al. "Dynamic Aggregated Network for Gait Recognition." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02114

Markdown

[Ma et al. "Dynamic Aggregated Network for Gait Recognition." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/ma2023cvpr-dynamic/) doi:10.1109/CVPR52729.2023.02114

BibTeX

@inproceedings{ma2023cvpr-dynamic,
  title     = {{Dynamic Aggregated Network for Gait Recognition}},
  author    = {Ma, Kang and Fu, Ying and Zheng, Dezhi and Cao, Chunshui and Hu, Xuecai and Huang, Yongzhen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {22076--22085},
  doi       = {10.1109/CVPR52729.2023.02114},
  url       = {https://mlanthology.org/cvpr/2023/ma2023cvpr-dynamic/}
}