A Generative Model of Worldwide Facial Appearance

Abstract

Human appearance depends on many proximate factors, including age, gender, ethnicity, and personal style choices. In this work, we model the relationship between human appearance and geographic location, which can impact these factors in complex ways. We propose GPS2Face, a dual-component generative network architecture that enables flexible facial generation with fine-grained control of latent factors. We use facial landmarks as a guide to synthesize likely faces for locations around the world. We train our model on a large-scale dataset of geotagged faces and evaluate our proposed model, both qualitatively and quantitatively, against previous work.

Cite

Text

Bessinger and Jacobs. "A Generative Model of Worldwide Facial Appearance." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019. doi:10.1109/WACV.2019.00172

Markdown

[Bessinger and Jacobs. "A Generative Model of Worldwide Facial Appearance." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019.](https://mlanthology.org/wacv/2019/bessinger2019wacv-generative/) doi:10.1109/WACV.2019.00172

BibTeX

@inproceedings{bessinger2019wacv-generative,
  title     = {{A Generative Model of Worldwide Facial Appearance}},
  author    = {Bessinger, Zachary and Jacobs, Nathan},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2019},
  pages     = {1569--1578},
  doi       = {10.1109/WACV.2019.00172},
  url       = {https://mlanthology.org/wacv/2019/bessinger2019wacv-generative/}
}