Multi-View Face Image Synthesis Using Factorization Model

Abstract

We present a sample-based method for synthesizing face images over a wide range of views. Here "human identity" and "head pose" are regarded as two factors influencing face appearance, and a factorization model is used to learn their interaction from a face database. Our method extends the original bilinear factorization model to the nonlinear case so that a globally optimal solution can be found when solving the "translation" task. Thus, one view of a new person's face image can be "translated" into other views. Experimental results show that the synthesized faces are quite similar to the ground truth. The proposed method can be applied to a broad range of human-computer interaction tasks, such as face recognition across views or face synthesis in virtual reality.
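As a rough illustration of the starting point the abstract describes (the linear bilinear model only, not the paper's nonlinear extension), an identity-by-pose factorization y_vp ≈ A_v b_p can be fit in closed form by a truncated SVD and then used for view "translation". All sizes and variable names below are invented for this sketch, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_people, dim, k = 3, 5, 8, 4   # toy sizes, purely illustrative

# Synthetic data with exact bilinear structure: Y[v, p] = A_true[v] @ b_true[p]
A_true = rng.normal(size=(n_views, dim, k))   # view-specific bases
b_true = rng.normal(size=(n_people, k))       # identity coefficients
Y = np.einsum('vdk,pk->vpd', A_true, b_true)  # Y[v, p] is a face feature vector

# Fit the bilinear model: stack all views into a (n_views*dim) x n_people
# matrix and truncate its SVD to rank k, giving per-view bases A and
# per-person identity vectors B.
Y_mat = Y.transpose(0, 2, 1).reshape(n_views * dim, n_people)
U, s, Vt = np.linalg.svd(Y_mat, full_matrices=False)
A = (U[:, :k] * s[:k]).reshape(n_views, dim, k)  # recovered view bases
B = Vt[:k]                                       # recovered identity vectors

# "Translation" task: observe a new person's face only in view 0, solve a
# least-squares problem for the identity vector, then render view 1.
y_obs = Y[0, 0]                                      # treat person 0 as new
b_new = np.linalg.lstsq(A[0], y_obs, rcond=None)[0]  # infer identity
pred = A[1] @ b_new                                  # synthesized view 1
print(np.allclose(pred, Y[1, 0]))                    # True on noiseless data
```

On noiseless rank-k data the truncated SVD reconstructs the factorization exactly, so the translated view matches the ground truth; with real face images the fit is approximate, which is the regime the paper's nonlinear extension targets.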

Cite

Text

Du and Lin. "Multi-View Face Image Synthesis Using Factorization Model." European Conference on Computer Vision, 2004. doi:10.1007/978-3-540-24837-8_19

Markdown

[Du and Lin. "Multi-View Face Image Synthesis Using Factorization Model." European Conference on Computer Vision, 2004.](https://mlanthology.org/eccv/2004/du2004eccv-multi/) doi:10.1007/978-3-540-24837-8_19

BibTeX

@inproceedings{du2004eccv-multi,
  title     = {{Multi-View Face Image Synthesis Using Factorization Model}},
  author    = {Du, Yangzhou and Lin, Xueyin},
  booktitle = {European Conference on Computer Vision},
  year      = {2004},
  pages     = {200--210},
  doi       = {10.1007/978-3-540-24837-8_19},
  url       = {https://mlanthology.org/eccv/2004/du2004eccv-multi/}
}