Multi-View Appearance-Based 3D Hand Pose Estimation
Abstract
We describe a novel approach to appearance-based hand pose estimation that relies on multiple cameras to improve accuracy and resolve ambiguities caused by self-occlusions. Rather than estimating 3D geometry, as most previous multi-view imaging systems do, our approach uses multiple views to extend current exemplar-based methods, which estimate hand pose by matching a probe image against a large discrete set of labeled hand pose images. We formulate the problem in a MAP (maximum a posteriori) framework, in which the information from multiple cameras is fused to provide reliable hand pose estimation. Our quantitative experimental results show that the correct estimation rate is much higher with our multi-view approach than with a single-view approach.
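The MAP fusion described in the abstract can be sketched as follows: each candidate pose in the exemplar set is scored by a per-camera matching likelihood, the per-view scores are multiplied together with a pose prior, and the pose with the highest posterior wins. This is a minimal illustrative sketch, not the paper's implementation; the exemplar data, feature vectors, and Gaussian matching score below are all assumptions made for the example.

```python
import math

# Hypothetical exemplar database: pose label -> per-camera template features.
# These toy 2-D feature vectors are illustrative, not from the paper.
EXEMPLARS = {
    "fist": {"cam0": [0.9, 0.1], "cam1": [0.8, 0.2]},
    "open": {"cam0": [0.1, 0.9], "cam1": [0.2, 0.8]},
}

def likelihood(probe, template, sigma=0.5):
    """Gaussian matching score between a probe feature and an exemplar template."""
    d2 = sum((p - t) ** 2 for p, t in zip(probe, template))
    return math.exp(-d2 / (2 * sigma ** 2))

def map_pose(probes, prior=None):
    """Return argmax over poses of prior(pose) * prod over cameras of likelihood."""
    best, best_post = None, -1.0
    for pose, views in EXEMPLARS.items():
        post = (prior or {}).get(pose, 1.0)  # uniform prior by default
        for cam, probe in probes.items():
            post *= likelihood(probe, views[cam])
        if post > best_post:
            best, best_post = pose, post
    return best

# Two camera observations that both resemble the "fist" exemplars.
print(map_pose({"cam0": [0.85, 0.15], "cam1": [0.75, 0.25]}))  # → fist
```

Fusing views multiplicatively means a pose only scores well if it is consistent with every camera, which is what lets the extra views disambiguate self-occlusions visible in a single view.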
Cite
Text
Guan et al. "Multi-View Appearance-Based 3D Hand Pose Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2006. doi:10.1109/CVPRW.2006.137
Markdown
[Guan et al. "Multi-View Appearance-Based 3D Hand Pose Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2006.](https://mlanthology.org/cvprw/2006/guan2006cvprw-multiview/) doi:10.1109/CVPRW.2006.137
BibTeX
@inproceedings{guan2006cvprw-multiview,
title = {{Multi-View Appearance-Based 3D Hand Pose Estimation}},
author = {Guan, Haiying and Chang, Jae Sik and Chen, Longbin and Feris, Rogério Schmidt and Turk, Matthew},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2006},
pages = {154},
doi = {10.1109/CVPRW.2006.137},
url = {https://mlanthology.org/cvprw/2006/guan2006cvprw-multiview/}
}