Quality Dynamic Human Body Modeling Using a Single Low-Cost Depth Camera
Abstract
In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor. Our method first captures a few high-quality scans of the user rotating herself in multiple poses, seen from different views. We fit each incomplete scan using template fitting techniques with a generic human template, and register all scans to every pose using global consistency constraints. After registration, these watertight models with different poses are used to train a parametric model in a fashion similar to the SCAPE method. Once the parametric model is built, it can be used as an animatable avatar or, more interestingly, to synthesize dynamic 3D models from single-view depth videos. Experimental results demonstrate the effectiveness of our system in producing dynamic models.
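The abstract mentions training a parametric model "in a fashion similar to the SCAPE method," in which each template triangle is deformed by a composition of a pose-induced rigid rotation and a learned pose-dependent (non-rigid) correction. The sketch below is not the paper's implementation; it only illustrates that general SCAPE-style composition, with all array shapes and the linear pose-to-deformation map `A` being illustrative assumptions.

```python
import numpy as np

def scape_edges(rest_edges, R, A, pose_params):
    """SCAPE-style per-triangle deformation (illustrative sketch).

    rest_edges  : (T, 3, 2) two edge vectors per template triangle
    R           : (T, 3, 3) per-triangle rigid rotations from the skeleton pose
    A           : (T, 9, P) assumed linear map from pose parameters to the
                  entries of the non-rigid correction matrix Q
    pose_params : (P,) pose parameter vector
    Returns deformed edge vectors of shape (T, 3, 2).
    """
    T = rest_edges.shape[0]
    # Pose-dependent non-rigid correction, modeled as identity plus a
    # linear function of the pose parameters (a common SCAPE-style choice).
    Q = (A @ pose_params).reshape(T, 3, 3) + np.eye(3)
    # Compose rotation and correction, then apply to the rest edges.
    return np.einsum('tij,tjk,tkl->til', R, Q, rest_edges)
```

With identity rotations and zero pose parameters the correction reduces to the identity, so the template edges are returned unchanged; training would fit `R`, `A`, and the shape terms to the registered scans.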
Cite
Text
Zhang et al. "Quality Dynamic Human Body Modeling Using a Single Low-Cost Depth Camera." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.92
Markdown
[Zhang et al. "Quality Dynamic Human Body Modeling Using a Single Low-Cost Depth Camera." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/zhang2014cvpr-quality/) doi:10.1109/CVPR.2014.92
BibTeX
@inproceedings{zhang2014cvpr-quality,
title = {{Quality Dynamic Human Body Modeling Using a Single Low-Cost Depth Camera}},
author = {Zhang, Qing and Fu, Bo and Ye, Mao and Yang, Ruigang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2014},
doi = {10.1109/CVPR.2014.92},
url = {https://mlanthology.org/cvpr/2014/zhang2014cvpr-quality/}
}