Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation
Abstract
Various deep learning techniques have been proposed to solve the single-view 2D-to-3D pose estimation problem. While average prediction accuracy has improved significantly over the years, performance on hard poses involving depth ambiguity, self-occlusion, and complex or rare configurations remains far from satisfactory. In this work, we target these hard poses and present a novel skeletal GNN learning solution. Specifically, we propose a hop-aware hierarchical channel-squeezing fusion layer that effectively extracts relevant information from neighboring nodes while suppressing undesired noise in GNN learning. In addition, we propose a temporal-aware dynamic graph construction procedure that is robust and effective for 3D pose estimation. Experimental results on the Human3.6M dataset show that our solution achieves a 10.3% improvement in average prediction accuracy over state-of-the-art techniques and substantially improves performance on hard poses. We further apply the proposed technique to the skeleton-based action recognition task and also achieve state-of-the-art performance.
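The hop-aware channel-squeezing idea can be sketched as follows. This is a minimal illustration, assuming per-hop neighbor features have already been aggregated per joint; the ChannelSqueezeFusion module name, the shrink ratio, and the per-hop linear projections are hypothetical choices for exposition, not the paper's exact layer.

import torch
import torch.nn as nn

class ChannelSqueezeFusion(nn.Module):
    """Fuse multi-hop joint features, squeezing channels for distant hops.

    Hypothetical sketch: close hops keep a wide channel budget, while
    far-away hops are projected to progressively fewer channels so that
    noisier long-range information contributes less to the fused feature.
    """

    def __init__(self, in_channels, base_channels, num_hops, shrink=0.5):
        super().__init__()
        self.proj = nn.ModuleList()
        out_dims = []
        for k in range(num_hops):
            # Channel budget decays with hop distance k (illustrative schedule).
            out_dim = max(1, int(base_channels * (shrink ** k)))
            out_dims.append(out_dim)
            self.proj.append(nn.Linear(in_channels, out_dim))
        # Final projection back to a fixed feature width after concatenation.
        self.fuse = nn.Linear(sum(out_dims), base_channels)

    def forward(self, hop_feats):
        # hop_feats: list of length num_hops; hop_feats[k] has shape
        # (batch, joints, in_channels) and aggregates joints k+1 hops away.
        squeezed = [p(h) for p, h in zip(self.proj, hop_feats)]
        return self.fuse(torch.cat(squeezed, dim=-1))

# Example usage (batch of 2 skeletons with 17 joints, 3 hop groups):
# layer = ChannelSqueezeFusion(in_channels=64, base_channels=64, num_hops=3)
# feats = [torch.randn(2, 17, 64) for _ in range(3)]
# out = layer(feats)  # shape (2, 17, 64)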
Cite
Text
Zeng et al. "Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01124
Markdown
[Zeng et al. "Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zeng2021iccv-learning/) doi:10.1109/ICCV48922.2021.01124
BibTeX
@inproceedings{zeng2021iccv-learning,
title = {{Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation}},
author = {Zeng, Ailing and Sun, Xiao and Yang, Lei and Zhao, Nanxuan and Liu, Minhao and Xu, Qiang},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {11436--11445},
doi = {10.1109/ICCV48922.2021.01124},
url = {https://mlanthology.org/iccv/2021/zeng2021iccv-learning/}
}