Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization

Abstract

We propose a data-driven approach to facial landmark localization that models the correlations between each landmark and its surrounding appearance features. At runtime, each feature casts a weighted vote to predict landmark locations, where the weight is precomputed to take into account the feature's discriminative power. The feature voting-based landmark detection is more robust than previous local appearance-based detectors; we combine it with nonparametric shape regularization to build a novel facial landmark localization pipeline that is robust to scale, in-plane rotation, occlusion, expression, and most importantly, extreme head pose. We achieve state-of-the-art performance on two especially challenging in-the-wild datasets populated by faces with extreme head pose and expression.
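The weighted feature voting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes each local feature has a position, a learned offset toward a landmark, and a precomputed reliability weight, and that votes are aggregated by a weighted average. All function and variable names here are hypothetical.

```python
import numpy as np

def vote_landmark(feature_positions, offsets, weights):
    """Aggregate weighted feature votes into one landmark estimate.

    Each feature votes for the landmark at (its own position + its
    learned offset); votes are combined by a weighted mean, so more
    discriminative features pull the estimate harder.
    Names and aggregation scheme are illustrative assumptions, not
    the paper's actual formulation.
    """
    positions = np.asarray(feature_positions, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    weights = np.asarray(weights, dtype=float)

    votes = positions + offsets        # each feature's predicted landmark location
    w = weights / weights.sum()        # normalize precomputed weights
    return (votes * w[:, None]).sum(axis=0)

# Two features voting on the same landmark; the second is twice as reliable.
est = vote_landmark(
    feature_positions=[[0.0, 0.0], [10.0, 10.0]],
    offsets=[[5.0, 5.0], [-4.0, -4.0]],
    weights=[1.0, 2.0],
)
```

In practice a voting scheme like this is typically made robust by aggregating vote maps (e.g., kernel density estimates) rather than a single mean, so that outlier features cannot dominate.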

Cite

Text

Smith et al. "Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.225

Markdown

[Smith et al. "Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/smith2014cvpr-nonparametric/) doi:10.1109/CVPR.2014.225

BibTeX

@inproceedings{smith2014cvpr-nonparametric,
  title     = {{Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization}},
  author    = {Smith, Brandon M. and Brandt, Jonathan and Lin, Zhe and Zhang, Li},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.225},
  url       = {https://mlanthology.org/cvpr/2014/smith2014cvpr-nonparametric/}
}