Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control

Abstract

Current face reenactment and swapping methods rely mainly on GAN frameworks, but recent attention has shifted to pre-trained diffusion models for their superior generation capabilities. However, training these models is resource-intensive, and the results have not yet reached satisfactory performance levels. To address this issue, we introduce Face-Adapter, an efficient and effective adapter that enables high-precision, high-fidelity face editing with pre-trained diffusion models. We observe that face reenactment and face swapping both essentially combine target structure, identity (ID), and attributes; we aim to sufficiently decouple control of these factors so that a single model handles both tasks. Specifically, our method comprises: 1) a Spatial Condition Generator that provides precise landmarks and background; 2) a Plug-and-play Identity Encoder that transfers face embeddings to the text space via a transformer decoder; and 3) an Attribute Controller that integrates spatial conditions and detailed attributes. Face-Adapter achieves comparable or even superior motion-control precision, ID retention, and generation quality relative to fully fine-tuned face reenactment/swapping models. Additionally, Face-Adapter integrates seamlessly with various StableDiffusion models.
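The core idea behind the Identity Encoder, mapping a face embedding into the text-conditioning space of a frozen diffusion model, can be sketched with a minimal cross-attention step. This is an illustrative NumPy sketch, not the paper's implementation: all dimensions, names, and the single-head attention design are assumptions (the abstract only states that a transformer decoder is used).

```python
import numpy as np

rng = np.random.default_rng(0)

D_FACE = 512   # assumed size of a face-recognition ID embedding
D_TEXT = 768   # text-embedding width commonly used by Stable Diffusion
N_TOKENS = 4   # assumed number of pseudo text tokens to produce

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class IdentityEncoderSketch:
    """Learned query tokens cross-attend to the face embedding,
    yielding token vectors shaped like text-prompt embeddings."""

    def __init__(self):
        # Learned query tokens and key/value projections (randomly
        # initialized here; in practice these would be trained).
        self.queries = rng.normal(0.0, 0.02, (N_TOKENS, D_TEXT))
        self.w_k = rng.normal(0.0, 0.02, (D_FACE, D_TEXT))
        self.w_v = rng.normal(0.0, 0.02, (D_FACE, D_TEXT))

    def __call__(self, face_emb):
        # face_emb: (D_FACE,) ID embedding -> (N_TOKENS, D_TEXT) tokens
        k = face_emb[None, :] @ self.w_k                      # (1, D_TEXT)
        v = face_emb[None, :] @ self.w_v                      # (1, D_TEXT)
        attn = softmax(self.queries @ k.T / np.sqrt(D_TEXT))  # (N_TOKENS, 1)
        return attn @ v                                       # pseudo text tokens

face_emb = rng.normal(size=D_FACE)
tokens = IdentityEncoderSketch()(face_emb)
print(tokens.shape)  # (4, 768)
```

The resulting pseudo tokens could then be concatenated with ordinary prompt embeddings and fed to the frozen UNet's cross-attention layers, which is why matching the text-embedding width matters.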

Cite

Text

Han et al. "Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72973-7_2

Markdown

[Han et al. "Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/han2024eccv-face/) doi:10.1007/978-3-031-72973-7_2

BibTeX

@inproceedings{han2024eccv-face,
  title     = {{Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control}},
  author    = {Han, Yue and Zhu, Junwei and He, Keke and Chen, Xu and Ge, Yanhao and Li, Wei and Li, Xiangtai and Zhang, Jiangning and Wang, Chengjie and Liu, Yong},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72973-7_2},
  url       = {https://mlanthology.org/eccv/2024/han2024eccv-face/}
}