From Regular Images to Animated Heads: A Least Squares Approach

Abstract

We show that we can effectively fit arbitrarily complex animation models to noisy image data. Our approach is based on least-squares adjustment using a set of progressively finer control triangulations and takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2-D feature points. In this way, complete head models—including ears and hair—can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera. They can then be fed to existing animation software to produce synthetic sequences.
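The core idea of combining several complementary observation types in one least-squares adjustment can be illustrated with a small sketch. This is not the authors' implementation: it assumes each data source (e.g., stereo, silhouettes, feature points) has already been linearized into a system `A_i x ≈ b_i` with a scalar weight `w_i`, and solves the stacked weighted problem with NumPy.

```python
# Illustrative sketch (not the paper's code): weighted least-squares fusion
# of heterogeneous observations. Assumes each source i contributes a
# linearized system A_i x ~ b_i with scalar confidence weight w_i.
import numpy as np

def fuse_least_squares(systems):
    """systems: list of (A, b, w) tuples.
    Returns x minimizing sum_i w_i * ||A_i x - b_i||^2
    by stacking sqrt(w_i)-scaled rows and solving one problem."""
    A = np.vstack([np.sqrt(w) * Ai for Ai, _, w in systems])
    b = np.concatenate([np.sqrt(w) * bi for _, bi, w in systems])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy example: two hypothetical "sources" constraining a 2-parameter model.
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
b1 = np.array([1.0, 2.0])
A2 = np.array([[1.0, 1.0]])
b2 = np.array([3.0])
x = fuse_least_squares([(A1, b1, 1.0), (A2, b2, 0.5)])
```

In the paper's setting, the unknowns `x` would be the control-triangulation vertex parameters, refined coarse-to-fine; here the weights stand in for the relative confidence assigned to each information source.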

Cite

Text

Fua and Miccio. "From Regular Images to Animated Heads: A Least Squares Approach." European Conference on Computer Vision, 1998. doi:10.1007/BFB0055667

Markdown

[Fua and Miccio. "From Regular Images to Animated Heads: A Least Squares Approach." European Conference on Computer Vision, 1998.](https://mlanthology.org/eccv/1998/fua1998eccv-regular/) doi:10.1007/BFB0055667

BibTeX

@inproceedings{fua1998eccv-regular,
  title     = {{From Regular Images to Animated Heads: A Least Squares Approach}},
  author    = {Fua, Pascal and Miccio, C.},
  booktitle = {European Conference on Computer Vision},
  year      = {1998},
  pages     = {188--202},
  doi       = {10.1007/BFB0055667},
  url       = {https://mlanthology.org/eccv/1998/fua1998eccv-regular/}
}