Semi-Supervised Eye Makeup Transfer by Swapping Learned Representation

Abstract

This paper introduces an autoencoder structure that transfers eye makeup from an arbitrary reference image to a source image realistically and faithfully, trained on both synthetic paired data and unpaired data in a semi-supervised way. Unlike image-to-image domain transfer, our framework requires only a single domain and follows an "encoding-swap-decoding" process. Makeup transfer is achieved by decoding the base representation of a source image together with the makeup representation of a reference image. Moreover, our method allows users to control the makeup degree by tuning a makeup weight. To the best of our knowledge, there is no public large-scale makeup dataset for evaluating data-driven approaches, so we have collected a dataset of non-makeup images and with-makeup images covering various eye makeup styles. Experiments demonstrate the effectiveness of our method against state-of-the-art methods, both qualitatively and quantitatively.
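The "encoding-swap-decoding" process with a tunable makeup weight can be sketched in a few lines. This is a minimal toy illustration of the idea as stated in the abstract, not the paper's actual model: the linear encoder/decoder maps, the representation sizes, and the linear interpolation of makeup representations are all assumptions for demonstration purposes.

```python
# Toy sketch of "encoding-swap-decoding" makeup transfer.
# All maps and shapes below are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": splits a 16-dim image vector into a base part (identity,
# geometry, etc.) and a makeup part.
W_base = rng.standard_normal((8, 16))
W_makeup = rng.standard_normal((4, 16))
# Toy "decoder": maps the concatenated representation back to image space.
W_dec = rng.standard_normal((16, 12))

def encode(img):
    return W_base @ img, W_makeup @ img

def decode(base, makeup):
    return W_dec @ np.concatenate([base, makeup])

def transfer(source, reference, weight=1.0):
    base_s, makeup_s = encode(source)
    _, makeup_r = encode(reference)
    # Interpolating the two makeup representations controls the makeup degree:
    # weight=0 keeps the source's makeup, weight=1 fully swaps in the reference's.
    makeup = (1.0 - weight) * makeup_s + weight * makeup_r
    return decode(base_s, makeup)

source = rng.standard_normal(16)
reference = rng.standard_normal(16)

out_none = transfer(source, reference, weight=0.0)  # source reconstructed
out_full = transfer(source, reference, weight=1.0)  # full makeup swap
```

With `weight=0.0` the output equals a plain reconstruction of the source, and intermediate weights interpolate smoothly between the two makeup styles.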

Cite

Text

Zhu et al. "Semi-Supervised Eye Makeup Transfer by Swapping Learned Representation." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00479

Markdown

[Zhu et al. "Semi-Supervised Eye Makeup Transfer by Swapping Learned Representation." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/zhu2019iccvw-semisupervised/) doi:10.1109/ICCVW.2019.00479

BibTeX

@inproceedings{zhu2019iccvw-semisupervised,
  title     = {{Semi-Supervised Eye Makeup Transfer by Swapping Learned Representation}},
  author    = {Zhu, Feida and Cao, Hongji and Feng, Zunlei and Zhang, Yongqiang and Luo, Wenbin and Zhou, Hucheng and Song, Mingli and Ma, Kai-Kuang},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2019},
  pages     = {3858--3867},
  doi       = {10.1109/ICCVW.2019.00479},
  url       = {https://mlanthology.org/iccvw/2019/zhu2019iccvw-semisupervised/}
}