High-Fidelity Clothed Avatar Reconstruction from a Single Image
Abstract
This paper presents a framework for efficient 3D clothed avatar reconstruction. Combining the high accuracy of optimization-based methods with the efficiency of learning-based methods, we propose a coarse-to-fine approach to high-fidelity clothed avatar reconstruction (CAR) from a single image. In the first stage, an implicit model learns the general shape of the person in canonical space in a learning-based manner; in the second stage, we refine surface detail by estimating the non-rigid deformation in posed space through optimization. A hyper-network generates a good initialization, which greatly accelerates the convergence of the optimization. Extensive experiments on various datasets show that the proposed CAR produces high-fidelity avatars for arbitrarily clothed humans in real scenes. The code will be released.
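The abstract describes a two-stage pipeline in which a hyper-network provides the starting point for per-subject optimization. Below is a minimal, hypothetical sketch of that initialization idea, not the authors' implementation: the module names (`HyperNet`, `implicit_mlp`), feature dimensions, the toy occupancy loss, and the fine-tuning loop are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the coarse-to-fine idea from the abstract:
# a hyper-network predicts the weights of a small implicit (occupancy) MLP from an
# image feature (stage 1), and those weights are then refined per subject by
# optimization (stage 2). All sizes and losses are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Maps a global image feature to the parameters of a tiny implicit MLP."""
    def __init__(self, feat_dim=256, hidden=64):
        super().__init__()
        self.hidden = hidden
        # Implicit MLP: 3 (xyz) -> hidden -> 1 (occupancy logit)
        self.n_params = (3 * hidden + hidden) + (hidden * 1 + 1)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, self.n_params),
        )

    def forward(self, feat):
        p = self.head(feat)                            # flat parameter vector
        h, i = self.hidden, 0
        w1 = p[i:i + 3 * h].view(h, 3); i += 3 * h     # first-layer weights
        b1 = p[i:i + h]; i += h                        # first-layer bias
        w2 = p[i:i + h].view(1, h); i += h             # output-layer weights
        b2 = p[i:i + 1]                                # output-layer bias
        return [w1, b1, w2, b2]

def implicit_mlp(points, params):
    """Evaluate the generated implicit MLP at 3D points (N, 3) -> occupancy logits (N, 1)."""
    w1, b1, w2, b2 = params
    x = F.relu(F.linear(points, w1, b1))
    return F.linear(x, w2, b2)

# Stage 1 (learning-based): the hyper-network outputs an initial implicit surface.
hyper = HyperNet()
image_feature = torch.randn(256)                       # stand-in for an image encoder output
init_params = hyper(image_feature)

# Stage 2 (optimization-based): refine a copy of those weights per subject.
# Here we fit a dummy occupancy target (a sphere) just to show the fine-tuning loop.
params = [p.detach().clone().requires_grad_(True) for p in init_params]
opt = torch.optim.Adam(params, lr=1e-3)
pts = torch.rand(1024, 3) * 2 - 1
target = (pts.norm(dim=-1, keepdim=True) < 0.5).float()
for _ in range(100):
    opt.zero_grad()
    loss = F.binary_cross_entropy_with_logits(implicit_mlp(pts, params), target)
    loss.backward()
    opt.step()
```

Because the optimization starts from hyper-network-predicted weights rather than a random initialization, far fewer refinement steps are needed per subject, which is the efficiency argument the abstract makes.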
Cite
Text
Liao et al. "High-Fidelity Clothed Avatar Reconstruction from a Single Image." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00837
Markdown
[Liao et al. "High-Fidelity Clothed Avatar Reconstruction from a Single Image." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/liao2023cvpr-highfidelity/) doi:10.1109/CVPR52729.2023.00837
BibTeX
@inproceedings{liao2023cvpr-highfidelity,
title = {{High-Fidelity Clothed Avatar Reconstruction from a Single Image}},
author = {Liao, Tingting and Zhang, Xiaomei and Xiu, Yuliang and Yi, Hongwei and Liu, Xudong and Qi, Guo-Jun and Zhang, Yong and Wang, Xuan and Zhu, Xiangyu and Lei, Zhen},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
  pages = {8662--8672},
doi = {10.1109/CVPR52729.2023.00837},
url = {https://mlanthology.org/cvpr/2023/liao2023cvpr-highfidelity/}
}