NECA: Neural Customizable Human Avatar

Abstract

Human avatars have become a novel type of 3D asset with various applications. Ideally, a human avatar should be fully customizable to accommodate different settings and environments. In this work, we introduce NECA, an approach capable of learning a versatile human representation from monocular or sparse-view videos, enabling granular customization across aspects such as pose, shadow, shape, lighting and texture. The core of our approach is to represent humans in complementary dual spaces and to predict disentangled neural fields of geometry, albedo and shadow, as well as external lighting, from which we are able to derive realistic renderings with high-frequency details via volumetric rendering. Extensive experiments demonstrate the advantage of our method over state-of-the-art methods in photorealistic rendering as well as in various editing tasks such as novel pose synthesis and relighting. Our code is available at https://github.com/iSEE-Laboratory/NECA.
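As a rough illustration of how such disentangled fields might be combined under volume rendering, the sketch below alpha-composites per-sample albedo, a shading term derived from external lighting, and a shadow factor along each ray. The function name `composite_rays` and the simple multiplicative shading model are assumptions for illustration only, not the paper's exact formulation.

```python
import torch

def composite_rays(density, albedo, shading, shadow, deltas):
    """Alpha-composite per-sample colors along each ray (hypothetical sketch).

    density: (R, S)    per-sample volume density
    albedo:  (R, S, 3) per-sample base color
    shading: (R, S, 3) per-sample irradiance from the environment lighting
    shadow:  (R, S, 1) per-sample shadow/visibility factor in [0, 1]
    deltas:  (R, S)    distance between consecutive samples
    """
    # Opacity of each sample (standard volume-rendering quadrature).
    alpha = 1.0 - torch.exp(-density * deltas)                        # (R, S)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                         # (R, S)
    weights = alpha * trans                                           # (R, S)
    # Assumed shading model: per-sample color = albedo * shading * shadow.
    rgb = albedo * shading * shadow                                   # (R, S, 3)
    # Accumulate weighted colors along the sample dimension.
    return torch.sum(weights[..., None] * rgb, dim=-2)                # (R, 3)
```

Editing then amounts to swapping one factor, e.g. replacing the lighting used to compute `shading` for relighting, or re-evaluating the fields under a new pose, while the other components stay fixed.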

Cite

Text

Xiao et al. "NECA: Neural Customizable Human Avatar." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01899

Markdown

[Xiao et al. "NECA: Neural Customizable Human Avatar." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/xiao2024cvpr-neca/) doi:10.1109/CVPR52733.2024.01899

BibTeX

@inproceedings{xiao2024cvpr-neca,
  title     = {{NECA: Neural Customizable Human Avatar}},
  author    = {Xiao, Junjin and Zhang, Qing and Xu, Zhan and Zheng, Wei-Shi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {20091--20101},
  doi       = {10.1109/CVPR52733.2024.01899},
  url       = {https://mlanthology.org/cvpr/2024/xiao2024cvpr-neca/}
}