TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video
Abstract
We present TexMesh, a novel approach for reconstructing detailed human meshes with high-resolution full-body texture from RGB-D video. TexMesh enables high-quality free-viewpoint rendering of humans. Given the RGB frames, the captured environment map, and a coarse per-frame human mesh from RGB-D tracking, our method reconstructs spatiotemporally consistent, detailed per-frame meshes along with a high-resolution albedo texture. Using the incident illumination, we accurately estimate local surface geometry and albedo, which allows us to apply photometric constraints to adapt a synthetically trained model to real-world sequences in a self-supervised manner for detailed surface geometry and high-resolution texture estimation. In practice, we train our model on a short example sequence for self-adaptation, after which it runs at interactive frame rates. We validate TexMesh on synthetic and real-world data, and show that it outperforms the state of the art both quantitatively and qualitatively.
Cite
Text
Zhi et al. "TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58607-2_29
Markdown
[Zhi et al. "TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zhi2020eccv-texmesh/) doi:10.1007/978-3-030-58607-2_29
BibTeX
@inproceedings{zhi2020eccv-texmesh,
title = {{TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video}},
author = {Zhi, Tiancheng and Lassner, Christoph and Tung, Tony and Stoll, Carsten and Narasimhan, Srinivasa G. and Vo, Minh},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58607-2_29},
url = {https://mlanthology.org/eccv/2020/zhi2020eccv-texmesh/}
}