Learning to Estimate 3D Hand Pose from Single RGB Images

Abstract

Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large-scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation from single color images.
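
The abstract describes a two-stage idea: detect 2D keypoint evidence in the image, then lift it to 3D with a network whose learned weights act as an implicit articulation prior. The following is a minimal sketch of that pipeline shape, not the authors' released code; the module names, layer sizes, and the 21-keypoint hand layout are assumptions for illustration (PyTorch).

import torch
import torch.nn as nn

NUM_KEYPOINTS = 21  # common hand-keypoint count; an assumption here

class Keypoint2DNet(nn.Module):
    """Stand-in detector: per-keypoint 2D score maps from an RGB crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.score = nn.Conv2d(64, NUM_KEYPOINTS, 1)  # one score map per keypoint

    def forward(self, rgb):
        return self.score(self.features(rgb))

class PosePriorNet(nn.Module):
    """Lifts 2D score maps to relative 3D keypoints; the learned mapping
    itself plays the role of the network-implicit articulation prior."""
    def __init__(self, map_size=32):
        super().__init__()
        self.lift = nn.Sequential(
            nn.Flatten(),
            nn.Linear(NUM_KEYPOINTS * map_size * map_size, 512), nn.ReLU(),
            nn.Linear(512, NUM_KEYPOINTS * 3),  # x, y, z per keypoint
        )

    def forward(self, score_maps):
        return self.lift(score_maps).view(-1, NUM_KEYPOINTS, 3)

# Usage: RGB crop -> 2D keypoint evidence -> 3D pose hypothesis.
rgb = torch.randn(1, 3, 32, 32)
maps = Keypoint2DNet()(rgb)
pose3d = PosePriorNet(map_size=32)(maps)
print(pose3d.shape)  # torch.Size([1, 21, 3])

Splitting detection from lifting matches the paper's motivation: the 2D stage resolves where the keypoints are in the image, while the lifting stage resolves the depth ambiguity using statistics of plausible hand articulations absorbed into its weights during training.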

Cite

Text

Zimmermann and Brox. "Learning to Estimate 3D Hand Pose from Single RGB Images." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.525

Markdown

[Zimmermann and Brox. "Learning to Estimate 3D Hand Pose from Single RGB Images." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/zimmermann2017iccv-learning-a/) doi:10.1109/ICCV.2017.525

BibTeX

@inproceedings{zimmermann2017iccv-learning-a,
  title     = {{Learning to Estimate 3D Hand Pose from Single RGB Images}},
  author    = {Zimmermann, Christian and Brox, Thomas},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.525},
  url       = {https://mlanthology.org/iccv/2017/zimmermann2017iccv-learning-a/}
}