Joint Co-Speech Gesture and Expressive Talking Face Generation Using Diffusion with Adapters
Abstract
Recent advances in co-speech gesture and talking head generation have been impressive, yet most methods focus on only one of the two tasks. Those that attempt to generate both often rely on separate models or network modules, which increases training complexity and ignores the inherent relationship between face and body movements. To address these challenges, in this paper we propose a novel model architecture that jointly generates face and body motions within a single network. This approach leverages shared weights between modalities, facilitated by adapters that enable adaptation to a common latent space. Our experiments demonstrate that the proposed framework not only maintains state-of-the-art co-speech gesture and talking head generation performance but also significantly reduces the number of parameters required.
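To make the idea of a shared backbone with per-modality adapters concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the module names, dimensions, adapter placement, and the simple additive audio conditioning are all assumptions made only to illustrate how one network can denoise both face and body motion streams through a common latent space.

```python
# Minimal sketch (assumed design, not the paper's code): one denoising backbone
# shared by face and body streams, with small per-modality adapters that map
# each stream into and out of a common latent space.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter with a residual connection."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class JointMotionDenoiser(nn.Module):
    """A single denoising network reused for face and body motion."""

    def __init__(self, face_dim: int, body_dim: int, latent_dim: int = 256):
        super().__init__()
        # Per-modality projections into and out of the common latent space.
        self.face_in = nn.Linear(face_dim, latent_dim)
        self.body_in = nn.Linear(body_dim, latent_dim)
        self.face_out = nn.Linear(latent_dim, face_dim)
        self.body_out = nn.Linear(latent_dim, body_dim)
        # Shared backbone: the same weights process both modalities.
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=4, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Small per-modality adapters specialize the shared features.
        self.face_adapter = Adapter(latent_dim)
        self.body_adapter = Adapter(latent_dim)

    def forward(self, face_noisy, body_noisy, audio_feat):
        # Map each noisy motion sequence into the common latent space and
        # condition on audio by simple addition (an assumption for brevity).
        face_h = self.face_adapter(self.face_in(face_noisy) + audio_feat)
        body_h = self.body_adapter(self.body_in(body_noisy) + audio_feat)
        # The same backbone (shared weights) processes both streams.
        face_h = self.backbone(face_h)
        body_h = self.backbone(body_h)
        # Predict the denoised motion for each modality.
        return self.face_out(face_h), self.body_out(body_h)


if __name__ == "__main__":
    B, T = 2, 32                       # batch size, sequence length
    face_dim, body_dim, latent = 64, 135, 256
    model = JointMotionDenoiser(face_dim, body_dim, latent)
    face = torch.randn(B, T, face_dim)
    body = torch.randn(B, T, body_dim)
    audio = torch.randn(B, T, latent)  # placeholder audio features
    f_hat, b_hat = model(face, body, audio)
    print(f_hat.shape, b_hat.shape)    # (2, 32, 64) (2, 32, 135)
```

In a sketch like this, the parameter savings come from the backbone being instantiated once for both modalities; only the lightweight adapters and the input/output projections are modality-specific.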
Cite
Text
Hogue et al. "Joint Co-Speech Gesture and Expressive Talking Face Generation Using Diffusion with Adapters." Winter Conference on Applications of Computer Vision, 2025.
Markdown
[Hogue et al. "Joint Co-Speech Gesture and Expressive Talking Face Generation Using Diffusion with Adapters." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/hogue2025wacv-joint/)
BibTeX
@inproceedings{hogue2025wacv-joint,
title = {{Joint Co-Speech Gesture and Expressive Talking Face Generation Using Diffusion with Adapters}},
author = {Hogue, Steven and Zhang, Chenxu and Tian, Yapeng and Guo, Xiaohu},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2025},
pages = {4163--4172},
url = {https://mlanthology.org/wacv/2025/hogue2025wacv-joint/}
}