Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video
Abstract
We present HandAvatar, a novel representation for hand animation and rendering, which can generate smoothly compositional geometry and self-occlusion-aware texture. Specifically, we first develop a MANO-HD model as a high-resolution mesh topology to fit personalized hand shapes. Subsequently, we decompose hand geometry into per-bone rigid parts, and then re-compose paired geometry encodings to derive an across-part consistent occupancy field. For texture modeling, we propose a self-occlusion-aware shading field (SelF). In SelF, drivable anchors are paved on the MANO-HD surface to record albedo information under a wide variety of hand poses. Moreover, directed soft occupancy is designed to describe the ray-to-surface relation, which is leveraged to generate an illumination field that disentangles pose-independent albedo from pose-dependent illumination. Trained on monocular video data, our HandAvatar can perform free-pose hand animation and rendering while achieving superior appearance fidelity. We also demonstrate that HandAvatar provides a route for hand appearance editing.
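The core geometric idea above is to predict occupancy per rigid bone part and then compose the parts into one consistent field. The following is a minimal illustrative sketch, not the authors' implementation: the soft-sphere per-part occupancy and the max-composition rule are assumptions for exposition (the paper composes paired geometry encodings of adjacent parts rather than raw occupancies).

```python
import numpy as np

def part_occupancy(points, center, radius):
    """Toy per-part occupancy: sigmoid falloff around a bone center.
    (Illustrative stand-in for a learned per-part occupancy network.)"""
    d = np.linalg.norm(points - center, axis=-1)
    return 1.0 / (1.0 + np.exp(10.0 * (d - radius)))

def compose_occupancy(points, parts):
    """Union of per-part fields via max, a common composition choice."""
    fields = [part_occupancy(points, c, r) for c, r in parts]
    return np.max(np.stack(fields, axis=0), axis=0)

# Two toy "bone" parts of a finger (center, radius).
parts = [(np.array([0.0, 0.0, 0.0]), 0.5),
         (np.array([1.0, 0.0, 0.0]), 0.5)]
query = np.array([[0.0, 0.0, 0.0],   # inside the first part
                  [5.0, 0.0, 0.0]])  # far outside both parts
occ = compose_occupancy(query, parts)
print(occ)  # near 1 for the first point, near 0 for the second
```

Because each part is rigidly attached to a bone, posing the hand only transforms the part centers; the composed field then deforms smoothly with the skeleton.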
Cite
Text
Chen et al. "Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00839

Markdown

[Chen et al. "Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/chen2023cvpr-hand/) doi:10.1109/CVPR52729.2023.00839

BibTeX
@inproceedings{chen2023cvpr-hand,
  title     = {{Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video}},
  author    = {Chen, Xingyu and Wang, Baoyuan and Shum, Heung-Yeung},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {8683--8693},
  doi       = {10.1109/CVPR52729.2023.00839},
  url       = {https://mlanthology.org/cvpr/2023/chen2023cvpr-hand/}
}