Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis

Abstract

We propose a new method for reconstructing controllable implicit 3D human models from sparse multi-view RGB videos. Our method defines the neural scene representation on mesh surface points and signed distances from the surface of a human body mesh. We identify an indistinguishability issue that arises when a point in 3D space is mapped to its nearest surface point on a mesh for learning a surface-aligned neural scene representation. To address this issue, we propose projecting a point onto the mesh surface using barycentric interpolation with modified vertex normals. Experiments on the ZJU-MoCap and Human3.6M datasets show that our approach achieves higher quality in novel-view and novel-pose synthesis than existing methods. We also demonstrate that our method easily supports control of body shape and clothes. Project page: https://pfnet-research.github.io/surface-aligned-nerf/.
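For intuition about the surface-aligned representation described above, the sketch below (plain NumPy, not the authors' code) ties a query point to a body mesh: it projects the point onto a triangle, computes barycentric weights there, interpolates the per-vertex normals with those weights, and measures a signed distance along the interpolated normal. The function names (`barycentric_coords`, `surface_aligned_features`), the single-triangle setting, and the use of a simple plane projection in place of the paper's projection with modified vertex normals are illustrative assumptions, not the published method.

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def surface_aligned_features(x, tri_verts, tri_normals):
    """Map a query point x to a surface point, its barycentric weights, and a
    signed distance measured along the interpolated vertex normal."""
    a, b, c = tri_verts
    # Project x onto the triangle's supporting plane (a simplified surrogate
    # for the paper's projection with modified vertex normals).
    face_n = np.cross(b - a, c - a)
    face_n /= np.linalg.norm(face_n)
    s = x - ((x - a) @ face_n) * face_n
    # Interpolate per-vertex normals with the barycentric weights at s.
    bary = barycentric_coords(s, a, b, c)
    n_interp = bary @ tri_normals
    n_interp /= np.linalg.norm(n_interp)
    # Signed distance of x from the surface along the interpolated normal.
    h = (x - s) @ n_interp
    return s, bary, h

# Toy example: a unit right triangle in the z = 0 plane with upward normals.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]] * 3)
s, bary, h = surface_aligned_features(np.array([0.2, 0.2, 0.5]), tri, normals)
print(s, bary, h)  # surface point [0.2 0.2 0.], weights [0.6 0.2 0.2], h = 0.5
```

In a full pipeline, the tuple (surface point, barycentric weights, signed distance) would stand in for raw world coordinates as the input to the radiance field; extending the sketch to a whole mesh would additionally require selecting the appropriate face for each query point.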

Cite

Text

Xu et al. "Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01542

Markdown

[Xu et al. "Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/xu2022cvpr-surfacealigned/) doi:10.1109/CVPR52688.2022.01542

BibTeX

@inproceedings{xu2022cvpr-surfacealigned,
  title     = {{Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis}},
  author    = {Xu, Tianhan and Fujita, Yasuhiro and Matsumoto, Eiichi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {15883--15892},
  doi       = {10.1109/CVPR52688.2022.01542},
  url       = {https://mlanthology.org/cvpr/2022/xu2022cvpr-surfacealigned/}
}