3D-Aided Deep Pose-Invariant Face Recognition
Abstract
Learning from synthetic faces, though appealing for its data efficiency, may not yield satisfactory performance due to the distribution discrepancy between synthetic and real face images. To mitigate this gap, we propose a 3D-Aided Deep Pose-Invariant Face Recognition Model (3D-PIM), which automatically recovers realistic frontal faces from arbitrary poses through a 3D face model in a novel way. Specifically, 3D-PIM incorporates a simulator with the aid of a 3D Morphable Model (3DMM) to obtain shape and appearance priors that accelerate face normalization learning and reduce the amount of training data required. It further leverages a global-local Generative Adversarial Network (GAN) with multiple critical improvements as a refiner to enhance the realism of both global structures and local details of the simulator's output using only unlabelled real data, while preserving identity information. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks clearly demonstrate the superiority of the proposed model over state-of-the-art methods.
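The global-local refinement idea described above can be illustrated with a minimal sketch: a global refiner processes the whole normalized face, while local refiners process patches around key facial regions, and the results are composed into one output. The placeholder refiners, the NumPy implementation, and the region coordinates below are all illustrative assumptions, not the paper's actual GAN architecture.

```python
import numpy as np

def refine_global(img):
    # Placeholder for the global refiner (stand-in for the global GAN generator):
    # lightly pulls the whole face toward its mean intensity.
    return 0.9 * img + 0.1 * img.mean()

def refine_local(patch):
    # Placeholder for a local refiner: mildly sharpens a facial-region patch.
    return np.clip(1.1 * patch - 0.1 * patch.mean(), 0.0, 1.0)

def global_local_refine(img, regions):
    """Compose a globally refined face with locally refined patches.

    `regions` lists (top, left, height, width) boxes for local refiners
    (e.g. eyes, nose, mouth); the coordinates used here are illustrative.
    """
    out = refine_global(img)
    for (t, l, h, w) in regions:
        out[t:t + h, l:l + w] = refine_local(img[t:t + h, l:l + w])
    return np.clip(out, 0.0, 1.0)

# A synthetic 128x128 "face" and hypothetical eye/nose regions.
face = np.random.default_rng(0).random((128, 128))
regions = [(30, 20, 20, 40), (30, 68, 20, 40), (70, 44, 24, 40)]
refined = global_local_refine(face, regions)
```

In the actual model, both refiners would be adversarially trained generators with an identity-preserving loss; this sketch only shows how global and local outputs are spatially composed.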
Cite
Text
Zhao et al. "3D-Aided Deep Pose-Invariant Face Recognition." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/165

Markdown

[Zhao et al. "3D-Aided Deep Pose-Invariant Face Recognition." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/zhao2018ijcai-d/) doi:10.24963/IJCAI.2018/165

BibTeX
@inproceedings{zhao2018ijcai-d,
title = {{3D-Aided Deep Pose-Invariant Face Recognition}},
author = {Zhao, Jian and Xiong, Lin and Cheng, Yu and Cheng, Yi and Li, Jianshu and Zhou, Li and Xu, Yan and Karlekar, Jayashree and Pranata, Sugiri and Shen, Shengmei and Xing, Junliang and Yan, Shuicheng and Feng, Jiashi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {1184--1190},
doi = {10.24963/IJCAI.2018/165},
url = {https://mlanthology.org/ijcai/2018/zhao2018ijcai-d/}
}