Audio-Driven Neural Gesture Reenactment with Video Motion Graphs

Abstract

Human speech is often accompanied by body gestures, including arm and hand gestures. We present a method that reenacts a high-quality video with gestures matching a target speech audio. The key idea of our method is to split and re-assemble clips from a reference video through a novel video motion graph encoding valid transitions between clips. To seamlessly connect different clips in the reenactment, we propose a pose-aware video blending network that synthesizes video frames around the stitched frames between two clips. Moreover, we develop an audio-based gesture searching algorithm to find the optimal order of the reenacted frames. Our system generates reenactments that are consistent with both the audio rhythms and the speech content. We evaluate our synthesized video quality quantitatively, qualitatively, and with user studies, demonstrating that our method produces videos of much higher quality and consistency with the target audio compared to previous work and baselines.
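The video motion graph idea can be illustrated with a minimal conceptual sketch (not the authors' implementation): nodes are reference-video frames, edges are natural frame successions plus transition edges between frames whose poses are similar enough to blend. The `poses` array, the `threshold` parameter, and the Euclidean pose distance are all illustrative assumptions.

```python
# Conceptual sketch of a video motion graph (hypothetical, not the paper's code).
# Nodes are frames; edges connect each frame to its natural successor, plus
# "transition" edges between non-adjacent frames with similar pose features.
import numpy as np

def build_motion_graph(poses, threshold=0.5):
    """poses: (N, D) array of per-frame pose features (assumed representation).
    Returns an adjacency list: frame i -> list of frames playable next."""
    n = len(poses)
    graph = {i: [] for i in range(n)}
    # Pairwise Euclidean distances between pose features.
    dists = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    for i in range(n - 1):
        graph[i].append(i + 1)  # natural playback edge
        for j in range(n):
            # Transition edge: jump to a non-adjacent, pose-similar frame,
            # where a blending step would smooth the stitched frames.
            if abs(i - j) > 1 and dists[i, j] < threshold:
                graph[i].append(j)
    return graph

# Toy example: 4 frames with 1-D pose features; frames 0 and 2 are similar.
poses = np.array([[0.0], [1.0], [0.05], [2.0]])
graph = build_motion_graph(poses, threshold=0.1)
```

A reenactment then corresponds to a walk on this graph; the paper's audio-based search (rhythm and content matching) would score candidate walks, which this sketch omits.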

Cite

Text

Zhou et al. "Audio-Driven Neural Gesture Reenactment with Video Motion Graphs." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00341

Markdown

[Zhou et al. "Audio-Driven Neural Gesture Reenactment with Video Motion Graphs." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhou2022cvpr-audiodriven/) doi:10.1109/CVPR52688.2022.00341

BibTeX

@inproceedings{zhou2022cvpr-audiodriven,
  title     = {{Audio-Driven Neural Gesture Reenactment with Video Motion Graphs}},
  author    = {Zhou, Yang and Yang, Jimei and Li, Dingzeyu and Saito, Jun and Aneja, Deepali and Kalogerakis, Evangelos},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {3418--3428},
  doi       = {10.1109/CVPR52688.2022.00341},
  url       = {https://mlanthology.org/cvpr/2022/zhou2022cvpr-audiodriven/}
}