Single-Image Facial Expression Recognition Using Deep 3D Re-Centralization
Abstract
Facial expression recognition (FER) aims to encode expression information from faces. Previous studies often assume that human subjects face the camera directly. Such a laboratory-controlled condition, however, is too rigid for in-the-wild applications. To tackle this issue, we propose a single-image facial expression recognition method that is robust to face orientation and lighting conditions. We achieve this with a novel face re-centralization method that reconstructs a 3D face model from a single image. We then propose a novel end-to-end deep neural network that utilizes both the re-centralized 3D model and facial landmarks for the FER task. A comprehensive evaluation on three real-world datasets shows that the proposed model outperforms state-of-the-art techniques on both large-scale and small-scale datasets. The effectiveness and robustness of our model are also demonstrated under both laboratory conditions and in-the-wild images.
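The pipeline summarized above, re-centralizing the face via a reconstructed 3D model and then classifying from the re-centralized face together with facial landmarks, can be pictured as a two-branch network. The sketch below is illustrative only: the branch structure, layer sizes, the 68-point landmark count, and the seven expression classes are assumptions for the example, not the authors' architecture.

```python
import torch
import torch.nn as nn


class TwoBranchFER(nn.Module):
    """Illustrative two-branch FER classifier (not the paper's exact architecture).

    One branch encodes the re-centralized (frontalized) face rendered from the
    reconstructed 3D model; the other encodes 2D facial landmarks. The fused
    features are mapped to expression categories.
    """

    def __init__(self, num_landmarks=68, num_classes=7):
        super().__init__()
        # Image branch: small CNN over the re-centralized face render.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Landmark branch: MLP over flattened (x, y) landmark coordinates.
        self.landmark_branch = nn.Sequential(
            nn.Linear(num_landmarks * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Fusion head over the concatenated branch features.
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, recentralized_face, landmarks):
        img_feat = self.image_branch(recentralized_face)       # (B, 64)
        lmk_feat = self.landmark_branch(landmarks.flatten(1))  # (B, 64)
        return self.classifier(torch.cat([img_feat, lmk_feat], dim=1))


if __name__ == "__main__":
    model = TwoBranchFER()
    face = torch.randn(4, 3, 112, 112)   # batch of re-centralized face renders
    landmarks = torch.randn(4, 68, 2)    # batch of 2D landmark sets
    logits = model(face, landmarks)
    print(logits.shape)                  # torch.Size([4, 7])
```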
Cite
Text
Bao et al. "Single-Image Facial Expression Recognition Using Deep 3D Re-Centralization." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00202
Markdown
[Bao et al. "Single-Image Facial Expression Recognition Using Deep 3D Re-Centralization." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/bao2019iccvw-singleimage/) doi:10.1109/ICCVW.2019.00202
BibTeX
@inproceedings{bao2019iccvw-singleimage,
title = {{Single-Image Facial Expression Recognition Using Deep 3D Re-Centralization}},
author = {Bao, Zhipeng and You, Shaodi and Gu, Lin and Yang, Zhenglu},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {1628-1636},
doi = {10.1109/ICCVW.2019.00202},
url = {https://mlanthology.org/iccvw/2019/bao2019iccvw-singleimage/}
}