3D Clothed Human Reconstruction from Sparse Multi-View Images

Abstract

Clothed human reconstruction based on implicit functions has recently received considerable attention. In this study, we experimentally investigate the most effective way to fuse 2D features from multi-view inputs and propose a method that uses a coarse 3D volume predicted by the network as a stronger 3D prior. We fuse the 2D features with an attention-based method to obtain detailed geometric predictions. In addition, we propose depth and color projection networks that predict a coarse depth volume and a coarse color volume from the input RGB images and depth maps, respectively. The coarse depth volume and coarse color volume serve as 3D priors for predicting occupancy and texture, respectively. Furthermore, we combine the fused 2D features with 3D features extracted from our 3D prior to predict occupancy, and propose a technique that adjusts the influence of the 2D and 3D features through learnable weights. The effectiveness of our method is demonstrated through qualitative and quantitative comparisons with recent multi-view clothed human reconstruction models.
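
To make the feature-combination idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module names (AttentionFusion, OccupancyHead), feature dimensions, and the use of ordinary multi-head self-attention over per-view features are all assumptions. It illustrates only the attention-based fusion of multi-view 2D features and the learnable weights that balance 2D and 3D features when predicting occupancy.

# Hypothetical sketch (not the authors' code): attention-based fusion of
# per-view 2D features, combined with 3D-prior features via learnable weights.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-view 2D pixel-aligned features with self-attention."""
    def __init__(self, feat_dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, view_feats):          # (B, V, C): one feature per view
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        return fused.mean(dim=1)            # (B, C): view-fused 2D feature

class OccupancyHead(nn.Module):
    """Predict occupancy from weighted 2D and 3D features."""
    def __init__(self, feat2d_dim=256, feat3d_dim=64):
        super().__init__()
        # Learnable scalars adjusting the influence of the 2D and 3D branches.
        self.w2d = nn.Parameter(torch.ones(1))
        self.w3d = nn.Parameter(torch.ones(1))
        self.mlp = nn.Sequential(
            nn.Linear(feat2d_dim + feat3d_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, feat2d, feat3d):
        x = torch.cat([self.w2d * feat2d, self.w3d * feat3d], dim=-1)
        return self.mlp(x)                  # occupancy in [0, 1]

# Toy usage: batch of 8 query points, 4 views, 256-dim 2D features,
# 64-dim 3D features sampled from the coarse volume.
views = torch.randn(8, 4, 256)
feat3d = torch.randn(8, 64)
occ = OccupancyHead()(AttentionFusion()(views), feat3d)
print(occ.shape)  # torch.Size([8, 1])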

Cite

Text

Hong et al. "3D Clothed Human Reconstruction from Sparse Multi-View Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00072

Markdown

[Hong et al. "3D Clothed Human Reconstruction from Sparse Multi-View Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/hong2024cvprw-3d/) doi:10.1109/CVPRW63382.2024.00072

BibTeX

@inproceedings{hong2024cvprw-3d,
  title     = {{3D Clothed Human Reconstruction from Sparse Multi-View Images}},
  author    = {Hong, Jin Gyu and Noh, Seung Young and Lee, Hee Kyung and Cheong, Won-Sik and Chang, Ju Yong},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {677-687},
  doi       = {10.1109/CVPRW63382.2024.00072},
  url       = {https://mlanthology.org/cvprw/2024/hong2024cvprw-3d/}
}