TeRA: Rethinking Text-Guided Realistic 3D Avatar Generation

Abstract

Efficient 3D avatar creation is in significant demand across the metaverse, film and game production, AR/VR, and related applications. In this paper, we rethink text-to-avatar generative models by proposing TeRA, a framework that is more efficient and effective than previous SDS-based models and general-purpose large 3D generative models. Our approach employs a two-stage training strategy for learning a native 3D avatar generative model. First, we distill a decoder from a large human reconstruction model to derive a structured latent space. Second, we train a text-controlled latent diffusion model to generate photorealistic 3D human avatars within this latent space. TeRA improves performance by eliminating slow iterative optimization and enables text-based partial customization through a structured 3D human representation. Experiments demonstrate our approach's superiority over previous text-to-avatar generative models in both subjective and objective evaluations.

Cite

Text

Wang et al. "TeRA: Rethinking Text-Guided Realistic 3D Avatar Generation." International Conference on Computer Vision, 2025.

Markdown

[Wang et al. "TeRA: Rethinking Text-Guided Realistic 3D Avatar Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/wang2025iccv-tera/)

BibTeX

@inproceedings{wang2025iccv-tera,
  title     = {{TeRA: Rethinking Text-Guided Realistic 3D Avatar Generation}},
  author    = {Wang, Yanwen and Zhuang, Yiyu and Zhang, Jiawei and Wang, Li and Zeng, Yifei and Cao, Xun and Zuo, Xinxin and Zhu, Hao},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {10686--10697},
  url       = {https://mlanthology.org/iccv/2025/wang2025iccv-tera/}
}