A Recurrent Encoder-Decoder Network for Sequential Face Alignment

Abstract

We propose a novel recurrent encoder-decoder network model for real-time video-based face alignment. Our model predicts 2D facial point maps regularized by a regression loss, while uniquely exploiting recurrent learning in both the spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, enabling iterative coarse-to-fine face alignment with a single network model. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity. Temporal recurrent learning is then applied to the decoupled temporal-variant features, yielding better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, demonstrating the importance of each component of our model as well as superior results over the state of the art on standard datasets.
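
The abstract is dense, so a minimal PyTorch sketch of the two recurrence levels it describes may help. Everything below is an illustrative assumption rather than the authors' architecture: the layer sizes, the 64x64 input resolution, the single combined response map, the GRU cell (a stand-in for whatever recurrent unit the paper uses), and the three refinement iterations are all hypothetical. Only the overall data flow follows the abstract: the output response map is fed back and concatenated with the input for coarse-to-fine refinement, and the bottleneck code is split so that only the temporal-variant half passes through the recurrent unit.

import torch
import torch.nn as nn

class RecurrentEncoderDecoder(nn.Module):
    """Sketch of the paper's two recurrence levels; dimensions are illustrative."""

    def __init__(self, code_dim=256):
        super().__init__()
        self.code_dim = code_dim
        # Encoder sees the RGB frame plus the fed-back response-map channel.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 2 * code_dim),
        )
        # Temporal recurrence on the temporal-variant half of the code only.
        self.rnn = nn.GRUCell(code_dim, code_dim)
        self.decoder = nn.Sequential(
            nn.Linear(2 * code_dim, 64 * 4 * 4),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=4), nn.Sigmoid(),
        )

    def forward(self, frames, n_refine=3):
        # frames: (T, B, 3, 64, 64) video clip.
        T, B = frames.shape[:2]
        h = frames.new_zeros(B, self.code_dim)       # temporal hidden state
        response = frames.new_zeros(B, 1, 64, 64)    # fed-back response map
        outputs = []
        for t in range(T):
            # Spatial recurrence: iterative coarse-to-fine refinement,
            # feeding the current response map back into the input.
            for _ in range(n_refine):
                code = self.encoder(torch.cat([frames[t], response], dim=1))
                invariant, variant = code.chunk(2, dim=1)
                h = self.rnn(variant, h)             # temporal recurrence
                response = self.decoder(torch.cat([invariant, h], dim=1))
            outputs.append(response)
            response = response.detach()  # stop feedback gradients across frames
        return torch.stack(outputs)

# Usage: a 5-frame clip with batch size 2 yields (5, 2, 1, 64, 64) response maps.
model = RecurrentEncoderDecoder()
maps = model(torch.randn(5, 2, 3, 64, 64))

Note that the regression loss mentioned in the abstract, which regularizes the point-map prediction, is omitted here; the sketch only traces the forward data flow of the two recurrences.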

Cite

Text

Peng et al. "A Recurrent Encoder-Decoder Network for Sequential Face Alignment." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46448-0_3

Markdown

[Peng et al. "A Recurrent Encoder-Decoder Network for Sequential Face Alignment." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/peng2016eccv-recurrent/) doi:10.1007/978-3-319-46448-0_3

BibTeX

@inproceedings{peng2016eccv-recurrent,
  title     = {{A Recurrent Encoder-Decoder Network for Sequential Face Alignment}},
  author    = {Peng, Xi and Feris, Rogério Schmidt and Wang, Xiaoyu and Metaxas, Dimitris N.},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {38--56},
  doi       = {10.1007/978-3-319-46448-0_3},
  url       = {https://mlanthology.org/eccv/2016/peng2016eccv-recurrent/}
}