DyadGAN: Generating Facial Expressions in Dyadic Interactions
Abstract
Generative Adversarial Networks (GANs) have been shown to produce synthetic face images of compelling realism. In this work, we present a conditional GAN approach to generate contextually valid facial expressions in dyadic human interactions. In contrast to previous work employing conditions related to facial attributes of generated identities, we focus on dyads in an attempt to model the influence of one person's facial expressions on the reactions of the other. To this end, we introduce a two-level optimization of GANs for interviewer-interviewee dyadic interactions. In the first stage we generate face sketches of the interviewer conditioned on facial expressions of the interviewee. The second stage synthesizes complete face images conditioned on the face sketches generated in the first stage. We demonstrate that our model is effective at generating visually compelling face images in dyadic interactions. Moreover, we quantitatively show that the facial expressions depicted in the generated interviewer face images reflect valid emotional reactions to the interviewee's behavior.
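The two-stage pipeline the abstract describes (sketch generation conditioned on the interviewee's expression, then image synthesis conditioned on the sketch) can be illustrated with a minimal toy sketch. The functions below are hypothetical stand-ins, not the paper's trained networks: the "sketch" and "image" are plain vectors, and the conditioning is a simple per-expression offset, used only to show how the output of the first conditional generator feeds the second.

```python
import random

# Toy stand-in for the stage-1 generator: maps noise z plus the
# interviewee's expression label to a "face sketch" vector.
# (Hypothetical; the paper uses a conditional GAN, not a lookup.)
def stage1_sketch_generator(z, interviewee_expression):
    offsets = {"smile": 1.0, "frown": -1.0, "neutral": 0.0}
    return [v + offsets[interviewee_expression] for v in z]

# Toy stand-in for the stage-2 generator: maps a sketch to a
# "complete face image" vector, conditioned only on the sketch.
def stage2_image_synthesizer(sketch):
    return [2.0 * s for s in sketch]

# End-to-end pipeline: noise + interviewee expression -> sketch -> image.
def dyadgan_pipeline(interviewee_expression, dim=4, seed=0):
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    sketch = stage1_sketch_generator(z, interviewee_expression)
    return stage2_image_synthesizer(sketch)
```

The point of the structure is that the second stage never sees the interviewee's expression directly; all conditioning information reaches it through the generated sketch, mirroring the two-level decomposition described above.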
Cite
Text
Huang and Khan. "DyadGAN: Generating Facial Expressions in Dyadic Interactions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.280
Markdown
[Huang and Khan. "DyadGAN: Generating Facial Expressions in Dyadic Interactions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/huang2017cvprw-dyadgan/) doi:10.1109/CVPRW.2017.280
BibTeX
@inproceedings{huang2017cvprw-dyadgan,
title = {{DyadGAN: Generating Facial Expressions in Dyadic Interactions}},
author = {Huang, Yuchi and Khan, Saad M.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2017},
  pages = {2259--2266},
doi = {10.1109/CVPRW.2017.280},
url = {https://mlanthology.org/cvprw/2017/huang2017cvprw-dyadgan/}
}