Modeling and Synthesis of Facial Motion Driven by Speech
Abstract
We introduce a novel approach to modeling the dynamics of human facial motion induced by the action of speech for the purpose of synthesis. We represent the trajectories of a number of salient features on the human face as the output of a dynamical system made up of two subsystems, one driven by the deterministic speech input, and a second driven by an unknown stochastic input. Inference of the model (learning) is performed automatically and involves an extension of independent component analysis to time-dependent data. Using a shape-texture decompositional representation for the face, we generate facial image sequences reconstructed from synthesized feature point positions.
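The two-subsystem structure described in the abstract can be illustrated with a minimal linear dynamical system sketch: a state evolves under both a deterministic (speech-like) input and an unknown stochastic input, and the observed feature-point trajectory is a linear readout of the state. All dimensions, matrices, and the placeholder input below are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: state n, speech input m, feature points p, timesteps T.
n, m, p, T = 4, 2, 6, 50

# Illustrative system matrices; in the paper these are inferred from data.
A = 0.9 * np.eye(n)                      # stable state-transition matrix
B = 0.1 * rng.standard_normal((n, m))    # couples the deterministic speech input
C = rng.standard_normal((p, n))          # maps state to feature-point positions

u = rng.standard_normal((T, m))          # placeholder deterministic speech input
x = np.zeros(n)
Y = np.empty((T, p))
for t in range(T):
    w = 0.01 * rng.standard_normal(n)    # unknown stochastic driving input
    x = A @ x + B @ u[t] + w             # both subsystems drive the state
    Y[t] = C @ x                         # synthesized feature trajectory
print(Y.shape)
```

Running the loop yields a `(T, p)` array of synthesized feature-point trajectories, which the paper then maps to images via its shape-texture decomposition.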
Cite
Text
Saisan et al. "Modeling and Synthesis of Facial Motion Driven by Speech." European Conference on Computer Vision, 2004. doi:10.1007/978-3-540-24672-5_36
Markdown
[Saisan et al. "Modeling and Synthesis of Facial Motion Driven by Speech." European Conference on Computer Vision, 2004.](https://mlanthology.org/eccv/2004/saisan2004eccv-modeling/) doi:10.1007/978-3-540-24672-5_36
BibTeX
@inproceedings{saisan2004eccv-modeling,
title = {{Modeling and Synthesis of Facial Motion Driven by Speech}},
author = {Saisan, Payam and Bissacco, Alessandro and Chiuso, Alessandro and Soatto, Stefano},
booktitle = {European Conference on Computer Vision},
year = {2004},
pages = {456-467},
doi = {10.1007/978-3-540-24672-5_36},
url = {https://mlanthology.org/eccv/2004/saisan2004eccv-modeling/}
}