Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images
Abstract
Fueled by the power of deep learning techniques and implicit shape learning, recent advances in single-image human digitization have reached unprecedented accuracy and can recover fine-grained surface details such as garment wrinkles. However, a common problem with implicit-based methods is that they cannot produce a separate, topology-consistent mesh for each garment piece, which is crucial for current 3D content creation pipelines. To address this issue, we propose ReEF, a novel geometry inference framework that reconstructs topology-consistent layered garment meshes by registering explicit garment templates to the whole-body implicit fields predicted from single images. Experiments demonstrate that our method notably outperforms its counterparts on single-image layered garment reconstruction and can provide high-quality digital assets for further content creation.
Cite
Text
Zhu et al. "Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00382
Markdown
[Zhu et al. "Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhu2022cvpr-registering/) doi:10.1109/CVPR52688.2022.00382
BibTeX
@inproceedings{zhu2022cvpr-registering,
title = {{Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images}},
author = {Zhu, Heming and Qiu, Lingteng and Qiu, Yuda and Han, Xiaoguang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {3845-3854},
doi = {10.1109/CVPR52688.2022.00382},
url = {https://mlanthology.org/cvpr/2022/zhu2022cvpr-registering/}
}