Online Learning of Robust Facial Feature Trackers
Abstract
This paper presents a head pose and facial feature estimation technique that works over a wide range of pose variations without a priori knowledge of the appearance of the face. Head pose is estimated by Levenberg-Marquardt (LM) optimisation, with the output of simple Lucas-Kanade (LK) feature trackers acting as constraints. Factored sampling and RANSAC are employed both to provide a robust pose estimate and to identify tracker drift by rejecting outliers in the estimation process. The system provides both a head pose estimate and the positions of facial features, and is capable of tracking over a wide range of head poses.
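The drift-detection idea in the abstract can be illustrated with a minimal RANSAC sketch. This is not the paper's method (which uses factored sampling and full LM pose estimation): here the inter-frame motion model is simplified to a 2D translation, and trackers whose displacement disagrees with the consensus motion are flagged as drifted. All names and thresholds are illustrative assumptions.

```python
import numpy as np

def ransac_drift_detection(src, dst, n_iters=200, inlier_thresh=2.0, seed=0):
    """Flag drifted trackers as RANSAC outliers.

    src, dst: (N, 2) arrays of tracker positions in consecutive frames.
    Motion model: a single 2D translation (a deliberate simplification
    of the paper's LM head-pose estimation). Returns the consensus
    translation and a boolean mask marking drifted trackers.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(n)           # minimal sample: one correspondence
        t = dst[i] - src[i]           # hypothesised translation
        resid = np.linalg.norm(src + t - dst, axis=1)
        inliers = resid < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on the consensus set only
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, ~best_inliers
```

For example, five trackers that all move by (3, -2) except one that has drifted will yield that translation as the consensus, with only the drifted tracker flagged as an outlier.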
Cite
Text
Sheerman-Chase et al. "Online Learning of Robust Facial Feature Trackers." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457450

Markdown

[Sheerman-Chase et al. "Online Learning of Robust Facial Feature Trackers." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/sheermanchase2009iccvw-online/) doi:10.1109/ICCVW.2009.5457450

BibTeX
@inproceedings{sheermanchase2009iccvw-online,
title = {{Online Learning of Robust Facial Feature Trackers}},
author = {Sheerman-Chase, Tim and Ong, Eng-Jon and Bowden, Richard},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2009},
pages = {1386-1392},
doi = {10.1109/ICCVW.2009.5457450},
url = {https://mlanthology.org/iccvw/2009/sheermanchase2009iccvw-online/}
}