A Data-Driven Approach for Facial Expression Synthesis in Video

Abstract

This paper presents a method for synthesizing a realistic facial animation of a target person, driven by a facial performance video of another person. Unlike traditional facial animation approaches, our system takes advantage of an existing facial performance database of the target person and generates the final video by retrieving database frames whose expressions are similar to the input ones. To achieve this, we develop an expression similarity metric that accurately measures the expression difference between two video frames. To enforce temporal coherence, our system employs a shortest-path algorithm to choose the optimal image for each frame from a set of candidate frames determined by the similarity metric. Finally, our system adopts an expression mapping method to further minimize the expression difference between the input and retrieved frames. Experimental results show that our system can generate high-quality facial animation using the proposed data-driven approach.
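The frame-selection step described above can be sketched as a dynamic-programming shortest path: each input frame has a few candidate database frames (pre-filtered by the expression similarity metric), and the chosen sequence balances per-frame similarity against the visual jump between consecutive retrieved frames. This is a minimal illustrative sketch; the cost functions, weights, and function names below are assumptions, not the paper's actual formulation.

```python
def select_frames(sim_cost, trans_cost, smoothness=1.0):
    """Pick one candidate per input frame via a Viterbi-style shortest path.

    sim_cost[t][k]   -- expression distance of candidate k at time t
                        (illustrative stand-in for the paper's metric)
    trans_cost(t, j, k) -- hypothetical jump cost between candidate j at
                        time t-1 and candidate k at time t
    smoothness       -- weight trading similarity against temporal coherence
    """
    T = len(sim_cost)
    K = [len(c) for c in sim_cost]
    # best[t][k]: cheapest path cost ending at candidate k of frame t
    best = [list(sim_cost[0])] + [[0.0] * K[t] for t in range(1, T)]
    back = [[0] * K[t] for t in range(T)]
    for t in range(1, T):
        for k in range(K[t]):
            costs = [best[t - 1][j] + smoothness * trans_cost(t, j, k)
                     for j in range(K[t - 1])]
            j = min(range(K[t - 1]), key=costs.__getitem__)
            best[t][k] = sim_cost[t][k] + costs[j]
            back[t][k] = j
    # backtrack the optimal candidate index for each frame
    k = min(range(K[-1]), key=best[-1].__getitem__)
    path = [k]
    for t in range(T - 1, 0, -1):
        k = back[t][k]
        path.append(k)
    return path[::-1]
```

With a nonzero smoothness weight the path tolerates a slightly worse per-frame match to avoid a jarring transition between retrieved frames, which is the temporal-coherence behavior the abstract describes.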

Cite

Text

Li et al. "A Data-Driven Approach for Facial Expression Synthesis in Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247658

Markdown

[Li et al. "A Data-Driven Approach for Facial Expression Synthesis in Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/li2012cvpr-data/) doi:10.1109/CVPR.2012.6247658

BibTeX

@inproceedings{li2012cvpr-data,
  title     = {{A Data-Driven Approach for Facial Expression Synthesis in Video}},
  author    = {Li, Kai and Xu, Feng and Wang, Jue and Dai, Qionghai and Liu, Yebin},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2012},
  pages     = {57--64},
  doi       = {10.1109/CVPR.2012.6247658},
  url       = {https://mlanthology.org/cvpr/2012/li2012cvpr-data/}
}