Expressive Talking Human from Single-Image with Imperfect Priors

Abstract

Building realistic and animatable avatars still requires minutes of multi-view or monocular self-rotation video, and most methods lack precise control over gestures and expressions. To push this boundary, we address the challenge of constructing a whole-body talking avatar from a single image. We propose a novel pipeline that tackles two critical issues: 1) complex dynamic modeling and 2) generalization to novel gestures and expressions. To achieve seamless generalization, we leverage recent pose-guided image-to-video diffusion models to generate imperfect video frames as pseudo-labels. To overcome the dynamic modeling challenge posed by these inconsistent and noisy pseudo-frames, we introduce a tightly coupled 3DGS-mesh hybrid avatar representation and apply several key regularizations to mitigate the inconsistencies caused by the imperfect labels. Extensive experiments on diverse subjects demonstrate that our method creates a photorealistic, precisely animatable, and expressive whole-body talking avatar from just a single image.
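The abstract outlines the core training pattern: render the avatar under driving poses, supervise against diffusion-generated pseudo-frames, and regularize the 3DGS-mesh coupling so the noisy labels do not pull the representation apart. The sketch below illustrates only that pattern under stated assumptions; every name in it (`ToyHybridAvatar`, `mesh_attachment_reg`, the loss weights) is a hypothetical stand-in and not the paper's implementation.

```python
import torch
import torch.nn as nn


class ToyHybridAvatar(nn.Module):
    """Toy stand-in for a tightly coupled 3DGS-mesh hybrid avatar.

    Gaussians are parameterized as offsets from fixed mesh anchor points,
    so a mesh-attachment regularizer can penalize drift away from the
    surface. Purely illustrative, not the paper's representation.
    """

    def __init__(self, n_gaussians=1024, image_size=64):
        super().__init__()
        self.anchors = torch.rand(n_gaussians, 3)          # fixed surface points
        self.offsets = nn.Parameter(torch.zeros(n_gaussians, 3))
        self.colors = nn.Parameter(torch.rand(n_gaussians, 3))
        self.image_size = image_size

    def render(self, pose):
        # Placeholder differentiable "renderer": pose-dependent soft mixing
        # of Gaussian colors. A real pipeline would splat the Gaussians.
        positions = self.anchors + self.offsets
        weights = torch.softmax(positions @ pose, dim=0)   # (N,)
        mean_color = (weights.unsqueeze(1) * self.colors).sum(dim=0)
        return mean_color.expand(self.image_size, self.image_size, 3)

    def mesh_attachment_reg(self):
        # Keep Gaussians close to their mesh anchors.
        return self.offsets.pow(2).sum(dim=1).mean()


def fit_avatar(avatar, pseudo_frames, poses, steps=200, lr=1e-2):
    """Fit the avatar to imperfect pseudo-labels with regularization."""
    optim = torch.optim.Adam(avatar.parameters(), lr=lr)
    for _ in range(steps):
        idx = int(torch.randint(len(pseudo_frames), (1,)))
        rendered = avatar.render(poses[idx])
        # L1 photometric term against the noisy pseudo-frame, plus a
        # mesh-attachment term that damps inconsistencies across labels.
        loss = (rendered - pseudo_frames[idx]).abs().mean()
        loss = loss + 0.1 * avatar.mesh_attachment_reg()
        optim.zero_grad()
        loss.backward()
        optim.step()
    return avatar


if __name__ == "__main__":
    avatar = ToyHybridAvatar()
    poses = [torch.randn(3) for _ in range(8)]             # stand-in pose codes
    # Real pseudo-labels would come from a pose-guided image-to-video
    # diffusion model; random frames keep this sketch self-contained.
    frames = [torch.rand(64, 64, 3) for _ in poses]
    fit_avatar(avatar, frames, poses)
```

The design point the sketch makes is the division of labor: the photometric term fits whatever the pseudo-frames show, while the attachment regularizer keeps the Gaussians anchored to the mesh, so frame-to-frame inconsistencies in the labels average out rather than accumulate as geometry.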

Cite

Text

Xiang et al. "Expressive Talking Human from Single-Image with Imperfect Priors." International Conference on Computer Vision, 2025.

Markdown

[Xiang et al. "Expressive Talking Human from Single-Image with Imperfect Priors." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/xiang2025iccv-expressive/)

BibTeX

@inproceedings{xiang2025iccv-expressive,
  title     = {{Expressive Talking Human from Single-Image with Imperfect Priors}},
  author    = {Xiang, Jun and Guo, Yudong and Hu, Leipeng and Guo, Boyang and Yuan, Yancheng and Zhang, Juyong},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {10398--10409},
  url       = {https://mlanthology.org/iccv/2025/xiang2025iccv-expressive/}
}