Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping
Abstract
Face reenactment and swapping share a similar identity- and attribute-manipulation pattern, yet most methods treat them separately, which is redundant and impractical. In this paper, we propose an effective end-to-end unified framework that achieves both tasks. Unlike existing methods that directly use pre-estimated structures and do not fully exploit the tasks' underlying similarity, our model transfers identity and attributes based on learned disentangled representations to generate high-fidelity faces. Specifically, Feature Disentanglement first separates identity and attributes in an unsupervised manner. The proposed Attribute Transfer (AttrT) then employs learned Feature Displacement Fields to transfer attributes at a fine granularity, while Identity Transfer (IdT) explicitly models identity-related feature interaction to adaptively control identity fusion. We combine AttrT and IdT according to their intrinsic relationship so that each task facilitates the other, i.e., improving identity consistency in reenactment and attribute preservation in swapping. Extensive experiments demonstrate the superiority of our method.
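To make the shared pattern concrete, below is a minimal PyTorch sketch of the pipeline the abstract outlines. The module names (FeatureDisentangler, AttributeTransfer, IdentityTransfer) follow the paper's terminology, but every layer choice, the conv encoder, the grid_sample-based warp for the displacement field, and the gated identity fusion, is an illustrative assumption standing in for networks the abstract does not specify.

# Minimal sketch, assuming conv features, a grid_sample warp, and a gated
# fusion; these are stand-ins, not the authors' actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDisentangler(nn.Module):
    # Splits a face image into identity and attribute feature maps.
    def __init__(self, ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.to_id = nn.Conv2d(ch, ch, 1)    # identity branch
        self.to_attr = nn.Conv2d(ch, ch, 1)  # attribute branch

    def forward(self, x):
        h = self.backbone(x)
        return self.to_id(h), self.to_attr(h)

class AttributeTransfer(nn.Module):
    # AttrT: predicts a per-pixel Feature Displacement Field from the two
    # attribute maps and warps the source attributes with it.
    def __init__(self, ch=64):
        super().__init__()
        self.to_field = nn.Conv2d(2 * ch, 2, 3, padding=1)  # (dx, dy) per location

    def forward(self, src_attr, drv_attr):
        field = self.to_field(torch.cat([src_attr, drv_attr], dim=1))
        n, _, h, w = field.shape
        # Base sampling grid in [-1, 1], the layout grid_sample expects.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
        grid = base + field.permute(0, 2, 3, 1)
        return F.grid_sample(src_attr, grid, align_corners=True)

class IdentityTransfer(nn.Module):
    # IdT: fuses identity into the attribute stream through a learned gate,
    # a simple stand-in for the paper's identity-related feature interaction.
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, id_feat, attr_feat):
        g = torch.sigmoid(self.gate(torch.cat([id_feat, attr_feat], dim=1)))
        return g * id_feat + (1 - g) * attr_feat

enc, attr_t, id_t = FeatureDisentangler(), AttributeTransfer(), IdentityTransfer()
src, tgt = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
src_id, src_attr = enc(src)
tgt_id, tgt_attr = enc(tgt)
# Reenactment: keep the source identity, warp its attributes toward the driver.
reenacted = id_t(src_id, attr_t(src_attr, tgt_attr))
# Swapping: inject the source identity into the target's own attributes.
swapped = id_t(src_id, tgt_attr)

Under these assumptions both tasks reduce to the same fused-feature computation, differing only in which image supplies the attributes; this is the redundancy a unified framework removes. A decoder mapping the fused features back to pixels is omitted here.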
Cite
Text
Xu et al. "Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19784-0_4
Markdown
[Xu et al. "Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/xu2022eccv-designing/) doi:10.1007/978-3-031-19784-0_4
BibTeX
@inproceedings{xu2022eccv-designing,
title = {{Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping}},
author = {Xu, Chao and Zhang, Jiangning and Han, Yue and Tian, Guanzhong and Zeng, Xianfang and Tai, Ying and Wang, Yabiao and Wang, Chengjie and Liu, Yong},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19784-0_4},
url = {https://mlanthology.org/eccv/2022/xu2022eccv-designing/}
}