Towards Accurate Alignment in Real-Time 3D Hand-Mesh Reconstruction
Abstract
3D hand-mesh reconstruction from RGB images facilitates many applications, including augmented reality (AR). However, this task demands not only real-time speed and accurate hand pose and shape, but also plausible mesh-image alignment. While existing works already achieve promising results, meeting all three requirements is very challenging. This paper presents a novel pipeline that decouples the hand-mesh reconstruction task into three stages: a joint stage to predict hand joints and segmentation; a mesh stage to predict a rough hand mesh; and a refine stage to fine-tune it with an offset mesh for better mesh-image alignment. With careful design of the network structure and loss functions, we promote high-quality finger-level mesh-image alignment while driving the models together to deliver real-time predictions. Extensive quantitative and qualitative results on benchmark datasets demonstrate that our method outperforms state-of-the-art methods in hand-mesh/pose precision and hand-image alignment. Finally, we showcase several real-time AR scenarios.
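To make the three-stage decoupling concrete, below is a minimal PyTorch sketch of the pipeline's structure, assuming 21 hand joints and a 778-vertex hand mesh (as in the MANO model). All module names, layer sizes, and feature dimensions here are illustrative placeholders, not the authors' carefully designed networks or losses; the refine stage simply adds a predicted per-vertex offset mesh to the rough mesh, as the abstract describes.

import torch
import torch.nn as nn

class JointStage(nn.Module):
    # Stage 1: predict per-joint heatmaps and a hand segmentation mask.
    def __init__(self, num_joints=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.joint_head = nn.Conv2d(64, num_joints, 1)  # joint heatmaps
        self.seg_head = nn.Conv2d(64, 1, 1)             # hand/background mask

    def forward(self, img):
        feat = self.backbone(img)
        return self.joint_head(feat), self.seg_head(feat), feat

class MeshStage(nn.Module):
    # Stage 2: regress a rough hand mesh from the joint-stage features.
    def __init__(self, num_vertices=778):
        super().__init__()
        self.num_vertices = num_vertices
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_vertices * 3)

    def forward(self, feat):
        v = self.fc(self.pool(feat).flatten(1))
        return v.view(-1, self.num_vertices, 3)

class RefineStage(nn.Module):
    # Stage 3: predict an offset mesh and add it to the rough mesh
    # to fine-tune mesh-image alignment.
    def __init__(self, num_vertices=778):
        super().__init__()
        self.num_vertices = num_vertices
        self.mlp = nn.Sequential(
            nn.Linear(num_vertices * 3, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 3),
        )

    def forward(self, rough_mesh):
        offset = self.mlp(rough_mesh.flatten(1)).view(-1, self.num_vertices, 3)
        return rough_mesh + offset  # refined mesh = rough mesh + offset mesh

# Usage: run the three stages in sequence on an RGB crop of the hand.
img = torch.randn(1, 3, 128, 128)
heatmaps, seg_mask, feat = JointStage()(img)
rough_mesh = MeshStage()(feat)
refined_mesh = RefineStage()(rough_mesh)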
Cite
Text
Tang et al. "Towards Accurate Alignment in Real-Time 3D Hand-Mesh Reconstruction." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01149
Markdown
[Tang et al. "Towards Accurate Alignment in Real-Time 3D Hand-Mesh Reconstruction." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/tang2021iccv-accurate/) doi:10.1109/ICCV48922.2021.01149
BibTeX
@inproceedings{tang2021iccv-accurate,
title = {{Towards Accurate Alignment in Real-Time 3D Hand-Mesh Reconstruction}},
author = {Tang, Xiao and Wang, Tianyu and Fu, Chi-Wing},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {11698--11707},
doi = {10.1109/ICCV48922.2021.01149},
url = {https://mlanthology.org/iccv/2021/tang2021iccv-accurate/}
}