DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition
Abstract
We propose a Dynamic Directed Graph Convolutional Network (DDGCN) to model the spatial and temporal features of human actions from their skeletal representations. The DDGCN consists of three new feature modeling modules: (1) Dynamic Convolutional Sampling (DCS), (2) Dynamic Convolutional Weight (DCW) assignment, and (3) Directed Graph Spatial-Temporal (DGST) feature extraction. Comprehensive experiments show that the DDGCN outperforms existing state-of-the-art action recognition approaches on multiple benchmark datasets. Our source code and model will be released at http://www.ece.lsu.edu/xinli/ActionModeling/index.html .
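To make the skeleton-based graph convolution concrete, below is a minimal, hedged sketch of a directed spatial-temporal graph convolution block over joint sequences. It is not the authors' released implementation: the PyTorch framework, the (batch, channels, frames, joints) tensor layout, the placeholder adjacency, and the learnable adjacency offset (loosely imitating the "dynamic" sampling/weighting idea) are all assumptions for illustration only.

# Illustrative sketch only -- NOT the authors' released DDGCN code.
# Assumes input of shape (N, C, T, V): batch, channels, frames, joints,
# and a fixed directed adjacency matrix of shape (V, V).
import torch
import torch.nn as nn

class DirectedGraphConv(nn.Module):
    """One simplified directed spatial-temporal graph convolution block."""

    def __init__(self, in_channels, out_channels, adjacency, temporal_kernel=9):
        super().__init__()
        # Fixed directed adjacency plus a learnable offset, so the effective
        # graph can adapt during training (a rough stand-in for dynamic graphs).
        self.register_buffer("A", adjacency)
        self.dynamic_A = nn.Parameter(torch.zeros_like(adjacency))
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.temporal = nn.Conv2d(
            out_channels, out_channels,
            kernel_size=(temporal_kernel, 1),
            padding=(temporal_kernel // 2, 0),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (N, C, T, V)
        x = self.spatial(x)                      # per-joint feature transform
        A = self.A + self.dynamic_A              # fixed + learned directed edges
        x = torch.einsum("nctv,vw->nctw", x, A)  # propagate along directed edges
        x = self.temporal(x)                     # temporal convolution over frames
        return self.relu(x)

if __name__ == "__main__":
    V = 25                                       # e.g., NTU RGB+D joint count (assumed)
    A = torch.eye(V)                             # placeholder directed adjacency
    block = DirectedGraphConv(3, 64, A)
    out = block(torch.randn(2, 3, 100, V))       # 2 clips, 100 frames
    print(out.shape)                             # torch.Size([2, 64, 100, 25])

In the actual DDGCN, the DCS and DCW modules determine the sampling neighborhood and convolution weights dynamically per input, and the DGST module extracts directed spatial-temporal features; the sketch above only shows the general shape of such a block.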
Cite
Text
Korban and Li. "DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58565-5_45
Markdown
[Korban and Li. "DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/korban2020eccv-ddgcn/) doi:10.1007/978-3-030-58565-5_45
BibTeX
@inproceedings{korban2020eccv-ddgcn,
title = {{DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition}},
author = {Korban, Matthew and Li, Xin},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58565-5_45},
url = {https://mlanthology.org/eccv/2020/korban2020eccv-ddgcn/}
}