EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters
Abstract
Recent studies have achieved impressive results in face generation and facial expression editing. However, existing approaches either generate only a discrete set of facial expressions or offer limited control over the emotion in the output image. To overcome this limitation, we introduce EmoStyle, a method for editing facial expressions based on valence and arousal, two continuous emotional parameters that can specify a broad range of emotions. EmoStyle is designed to disentangle emotion from other facial characteristics and to edit the face to display a desired emotion. We employ the pre-trained StyleGAN2 generator, taking advantage of its rich latent space. We also propose an adapted inversion method so that our system can be applied to images outside the StyleGAN2 domain in a one-shot manner. Qualitative and quantitative evaluations show that our approach can synthesize a wide range of expressions in high-resolution output images.
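To give a sense of the continuous valence/arousal interface the abstract describes, here is a minimal, hypothetical sketch of editing a StyleGAN2-style latent code conditioned on the two parameters. The function name, the linear-direction formulation, and the `directions` dictionary are illustrative assumptions, not the paper's actual architecture (EmoStyle learns its edit from data rather than using fixed linear directions):

```python
import numpy as np

W_DIM = 512  # dimensionality of a StyleGAN2 w-space latent

def emostyle_edit(w, valence, arousal, directions):
    """Illustrative sketch only: shift a w-latent along assumed
    'valence' and 'arousal' edit directions. Both parameters are
    continuous, here clamped to [-1, 1] as an example range."""
    assert -1.0 <= valence <= 1.0 and -1.0 <= arousal <= 1.0
    return w + valence * directions["valence"] + arousal * directions["arousal"]

# Hypothetical learned directions and a sample latent code.
rng = np.random.default_rng(0)
directions = {
    "valence": rng.standard_normal(W_DIM),
    "arousal": rng.standard_normal(W_DIM),
}
w = rng.standard_normal(W_DIM)

# High valence, moderate arousal -> e.g. an excited/happy expression.
w_edited = emostyle_edit(w, 0.8, 0.5, directions)
```

Because valence and arousal are continuous rather than categorical labels, any point in the 2D emotion plane (e.g. low valence / high arousal for fear-like expressions) maps to an edit, which is the key contrast with discrete-expression methods.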
Cite
Text
Azari and Lim. "EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters." Winter Conference on Applications of Computer Vision, 2024.
Markdown
[Azari and Lim. "EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/azari2024wacv-emostyle/)
BibTeX
@inproceedings{azari2024wacv-emostyle,
title = {{EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters}},
author = {Azari, Bita and Lim, Angelica},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {6385--6394},
url = {https://mlanthology.org/wacv/2024/azari2024wacv-emostyle/}
}