Learning Implicit Templates for Point-Based Clothed Human Modeling

Abstract

We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing. Our framework first learns implicit surface templates representing the coarse clothing topology, and then employs the templates to guide the generation of point sets that further capture pose-dependent clothing deformations such as wrinkles. Our pipeline incorporates the merits of both implicit and explicit representations, namely, the ability to handle varying topology and the efficiency in capturing fine details. We also propose diffused skinning to facilitate template training, especially for loose clothing, and projection-based pose encoding to extract pose information from mesh templates without a predefined UV map or connectivity. Our code is publicly available at https://github.com/jsnln/fite.
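
The projection-based pose encoding mentioned above lends itself to a short illustration. The following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the names ortho_project and PoseEncoder, the map resolution, and the network sizes are all illustrative assumptions. It shows the core idea that pose features can be extracted from an unordered set of posed template points by rasterizing them into a 2D orthographic position map and encoding that map with a CNN, which requires only point positions, no UV map or mesh connectivity.

# Hypothetical sketch of projection-based pose encoding; names and sizes
# are illustrative, not from the released code.
import torch
import torch.nn as nn

def ortho_project(points, res=64):
    """Scatter 3D points (N, 3) into a (3, res, res) orthographic position map.

    Points are assumed normalized to [-1, 1]^3; x/y select the pixel, and the
    stored value is the full 3D position. Later points simply overwrite
    earlier ones per pixel, a simplification of depth-ordered rasterization.
    """
    img = torch.zeros(3, res, res)
    u = ((points[:, 0] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
    v = ((points[:, 1] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
    img[:, v, u] = points.t()
    return img

class PoseEncoder(nn.Module):
    """Tiny CNN mapping a position map to a per-pixel pose feature grid."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, padding=1),
        )
    def forward(self, pos_map):
        return self.net(pos_map)

points = torch.rand(5000, 3) * 2 - 1        # stand-in for posed template points
pos_map = ortho_project(points)              # (3, 64, 64) position map
feats = PoseEncoder()(pos_map.unsqueeze(0))  # (1, 32, 64, 64) pose features
print(feats.shape)

The repository linked above is the authoritative reference; this sketch only conveys why the encoding sidesteps connectivity: the projection consumes bare point positions, so any template produced by the implicit stage can be encoded directly.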

Cite

Text

Lin et al. "Learning Implicit Templates for Point-Based Clothed Human Modeling." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20062-5_13

Markdown

[Lin et al. "Learning Implicit Templates for Point-Based Clothed Human Modeling." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/lin2022eccv-learning/) doi:10.1007/978-3-031-20062-5_13

BibTeX

@inproceedings{lin2022eccv-learning,
  title     = {{Learning Implicit Templates for Point-Based Clothed Human Modeling}},
  author    = {Lin, Siyou and Zhang, Hongwen and Zheng, Zerong and Shao, Ruizhi and Liu, Yebin},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20062-5_13},
  url       = {https://mlanthology.org/eccv/2022/lin2022eccv-learning/}
}