Real-Time 3-D Face Tracking and Modeling from a Webcam

Abstract

We first infer a 3-D face model from a single frontal image by automatically extracting 2-D landmarks and deforming a generic 3-D model to fit them. Then, for any input image, we extract feature points and track them in 2-D. Given these correspondences, which may be noisy or incorrect, we robustly estimate the 3-D head pose using PnP within a RANSAC process. As the head moves, we dynamically add new feature points to handle a large range of poses. When the tracker gets lost, due to motion blur or occlusion, the system re-initializes by matching feature points against those of the reference frontal image. Our system runs in real time (>15 Hz) on a standard CPU with a GPU card. We present results on stored video and will present a live demo, showing robust tracking under large motion, fast movement, occlusion, and facial expression variations. We also show comparative results against the ground-truth BU head-tracking dataset.
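The robust pose step above follows the standard RANSAC hypothesize-and-verify scheme: repeatedly fit a model to a minimal random sample of correspondences, count how many points agree with it, and keep the hypothesis with the largest consensus set. The sketch below illustrates that loop on a toy 2-D line fit with gross outliers; it is not the paper's implementation, and all names and parameters are illustrative. The paper applies the same loop to PnP pose hypotheses from 2-D/3-D feature correspondences.

```python
import numpy as np

def ransac_fit(points, n_iters=200, thresh=0.1, rng=None):
    """Generic RANSAC loop, demonstrated on a line fit y = a*x + b.

    Hypothesize from a minimal sample (2 points), score by inlier count,
    keep the best, then refit on all inliers. (Illustrative sketch, not
    the paper's code.)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: a line is determined by two points.
        (x0, y0), (x1, y1) = points[rng.choice(len(points), 2, replace=False)]
        if np.isclose(x1, x0):
            continue  # degenerate sample, skip
        a = (y1 - y0) / (x1 - x0)
        b = y0 - a * x0
        # Consensus: residual of every point against the hypothesis.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set only.
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return (a, b), best_inliers

# Toy data: y = 2x + 1 with small noise, plus 10 gross outliers.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.02, 50)
y[:10] += rng.uniform(5, 10, 10)  # corrupt 20% of the points
pts = np.column_stack([x, y])
model, inliers = ransac_fit(pts)
```

In practice, OpenCV's `solvePnPRansac` bundles this same loop with a minimal PnP solver, recovering the camera/head pose while rejecting mismatched feature tracks.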

Cite

Text

Choi et al. "Real-Time 3-D Face Tracking and Modeling from a Webcam." IEEE/CVF Winter Conference on Applications of Computer Vision, 2012. doi:10.1109/WACV.2012.6163031

Markdown

[Choi et al. "Real-Time 3-D Face Tracking and Modeling from a Webcam." IEEE/CVF Winter Conference on Applications of Computer Vision, 2012.](https://mlanthology.org/wacv/2012/choi2012wacv-real/) doi:10.1109/WACV.2012.6163031

BibTeX

@inproceedings{choi2012wacv-real,
  title     = {{Real-Time 3-D Face Tracking and Modeling from a Webcam}},
  author    = {Choi, Jongmoo and Dumortier, Yann and Choi, Sang-Il and Ahmad, Muhammad Bilal and Medioni, Gérard G.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2012},
  pages     = {33-40},
  doi       = {10.1109/WACV.2012.6163031},
  url       = {https://mlanthology.org/wacv/2012/choi2012wacv-real/}
}