ReenactGAN: Learning to Reenact Faces via Boundary Transfer

Abstract

We present a novel learning-based framework for face reenactment. The proposed method, known as ReenactGAN, is capable of transferring facial movements and expressions from an arbitrary person’s monocular video input to a target person’s video. Instead of performing a direct transfer in the pixel space, which could result in structural artifacts, we first map the source face onto a boundary latent space. A transformer is subsequently used to adapt the source face’s boundary to the target’s boundary. Finally, a target-specific decoder is used to generate the reenacted target face. Thanks to the effective and reliable boundary-based transfer, our method can perform photo-realistic face reenactment. In addition, ReenactGAN is appealing in that the whole reenactment process is purely feed-forward, and thus reenactment can run in real time (30 FPS on one GTX 1080 GPU). The dataset and model are publicly available on our project page.
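The abstract describes a three-stage feed-forward pipeline: encode the source frame into boundary heatmaps, adapt that boundary to the target person, then decode the adapted boundary into a target face. The following is a minimal PyTorch sketch of that data flow only. The module names (BoundaryEncoder, BoundaryTransformer, TargetDecoder), the layer choices, and the 15-channel heatmap count are illustrative assumptions for this sketch, not the paper's actual networks, which are deeper adversarially trained models with a decoder trained per target identity.

import torch
import torch.nn as nn

def conv_block(c_in, c_out, down=True):
    """4x4 stride-2 (de)convolution followed by a ReLU."""
    layer = nn.Conv2d if down else nn.ConvTranspose2d
    return nn.Sequential(layer(c_in, c_out, 4, stride=2, padding=1),
                         nn.ReLU(inplace=True))

class BoundaryEncoder(nn.Module):
    """Maps a face image to boundary heatmaps (the boundary latent space)."""
    def __init__(self, n_boundaries=15):  # channel count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 64), conv_block(64, 128),
            conv_block(128, 64, down=False),
            nn.ConvTranspose2d(64, n_boundaries, 4, stride=2, padding=1),
            nn.Sigmoid())  # heatmap values in [0, 1]

    def forward(self, x):
        return self.net(x)

class BoundaryTransformer(nn.Module):
    """Adapts an arbitrary source boundary to the target's boundary space."""
    def __init__(self, n_boundaries=15):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(n_boundaries, 64),
            conv_block(64, 64, down=False),
            nn.Conv2d(64, n_boundaries, 3, padding=1),
            nn.Sigmoid())

    def forward(self, b):
        return self.net(b)

class TargetDecoder(nn.Module):
    """Target-specific decoder: boundary heatmaps -> reenacted target face."""
    def __init__(self, n_boundaries=15):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(n_boundaries, 64), conv_block(64, 128),
            conv_block(128, 64, down=False),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh())  # RGB output in [-1, 1]

    def forward(self, b):
        return self.net(b)

@torch.no_grad()
def reenact(encoder, transformer, decoder, source_frame):
    """Purely feed-forward inference: encode -> transform -> decode."""
    boundary_src = encoder(source_frame)
    boundary_tgt = transformer(boundary_src)
    return decoder(boundary_tgt)

if __name__ == "__main__":
    enc, trans, dec = BoundaryEncoder(), BoundaryTransformer(), TargetDecoder()
    frame = torch.randn(1, 3, 256, 256)  # a dummy source video frame
    print(reenact(enc, trans, dec, frame).shape)  # torch.Size([1, 3, 256, 256])

Because inference is a single forward pass through three networks, there is no per-frame optimization, which is why the pipeline can run in real time.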

Cite

Text

Wu et al. "ReenactGAN: Learning to Reenact Faces via Boundary Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01246-5_37

Markdown

[Wu et al. "ReenactGAN: Learning to Reenact Faces via Boundary Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/wu2018eccv-reenactgan/) doi:10.1007/978-3-030-01246-5_37

BibTeX

@inproceedings{wu2018eccv-reenactgan,
  title     = {{ReenactGAN: Learning to Reenact Faces via Boundary Transfer}},
  author    = {Wu, Wayne and Zhang, Yunxuan and Li, Cheng and Qian, Chen and Loy, Chen Change},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01246-5_37},
  url       = {https://mlanthology.org/eccv/2018/wu2018eccv-reenactgan/}
}