HumanCrafter: Synergizing Generalizable Human Reconstruction and Semantic 3D Segmentation
Abstract
Recent advances in generative models have achieved high-fidelity 3D human reconstruction, yet their utility for downstream tasks (e.g., human 3D segmentation) remains constrained. We propose HumanCrafter, a unified framework that jointly models appearance and human-part semantics from a single image in a feed-forward manner. Specifically, we integrate human geometric priors in the reconstruction stage and self-supervised semantic priors in the segmentation stage. To address the scarcity of labeled 3D human datasets, we further develop an interactive annotation procedure for generating high-quality data-label pairs. Our pixel-aligned aggregation enables cross-task synergy, while the multi-task objective simultaneously optimizes texture modeling fidelity and semantic consistency. Extensive experiments demonstrate that HumanCrafter surpasses existing state-of-the-art methods in both 3D human-part segmentation and 3D human reconstruction from a single image.
Cite
Text
Pan et al. "HumanCrafter: Synergizing Generalizable Human Reconstruction and Semantic 3D Segmentation." Advances in Neural Information Processing Systems, 2025.
Markdown
[Pan et al. "HumanCrafter: Synergizing Generalizable Human Reconstruction and Semantic 3D Segmentation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/pan2025neurips-humancrafter/)
BibTeX
@inproceedings{pan2025neurips-humancrafter,
title = {{HumanCrafter: Synergizing Generalizable Human Reconstruction and Semantic 3D Segmentation}},
author = {Pan, Panwang and Shen, Tingting and Li, Chenxin and Lin, Yunlong and Wen, Kairun and Zhao, Jingjing and Yuan, Yixuan},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/pan2025neurips-humancrafter/}
}