Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets
Abstract
In this paper we make the first effort, to the best of our knowledge, to combine multiple face landmark datasets with different landmark definitions into a super dataset, with a union of all landmark types computed in each image as output. Our approach is flexible, and our system can optionally use known landmarks in the target dataset to constrain the localization. Our novel pipeline is built upon variants of state-of-the-art facial landmark localization methods. Specifically, we propose to label images in the target dataset jointly rather than independently and exploit exemplars from both the source datasets and the target dataset. This approach integrates nonparametric appearance and shape modeling and graph matching together to achieve our goal.
Cite
Text
Smith and Zhang. "Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10599-4_6
Markdown
[Smith and Zhang. "Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets." European Conference on Computer Vision, 2014.](https://mlanthology.org/eccv/2014/smith2014eccv-collaborative/) doi:10.1007/978-3-319-10599-4_6
BibTeX
@inproceedings{smith2014eccv-collaborative,
title = {{Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets}},
author = {Smith, Brandon M. and Zhang, Li},
booktitle = {European Conference on Computer Vision},
year = {2014},
pages = {78--93},
doi = {10.1007/978-3-319-10599-4_6},
url = {https://mlanthology.org/eccv/2014/smith2014eccv-collaborative/}
}