Spatially-Invariant Style-Codes Controlled Makeup Transfer
Abstract
Transferring makeup from a misaligned reference image is challenging. Previous methods overcome this barrier by computing pixel-wise correspondences between the two images, which is inaccurate and computationally expensive. In this paper, we take a different perspective and break the makeup transfer problem down into a two-step extraction-assignment process. To this end, we propose a Style-based Controllable GAN model that consists of three components, corresponding to target style-code encoding, face identity feature extraction, and makeup fusion, respectively. In particular, a Part-specific Style Encoder encodes the component-wise makeup style of the reference image into a style-code in an intermediate latent space W. The style-code discards spatial information and is therefore invariant to spatial misalignment. At the same time, the style-code embeds component-wise information, enabling flexible partial makeup editing from multiple references. This style-code, together with the source identity features, is fed into a Makeup Fusion Decoder equipped with multiple AdaIN layers to generate the final result. Our proposed method demonstrates great flexibility in makeup transfer by supporting makeup removal, shade-controllable makeup transfer, and part-specific makeup transfer, even under large spatial misalignment. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods. Code is available at https://github.com/makeuptransfer/SCGAN.
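The pipeline described in the abstract (spatially-pooled style-code driving AdaIN layers in a decoder) can be pictured with a minimal sketch. The module names, layer widths, and style-code dimension below are illustrative assumptions for exposition, not the released SCGAN code; the sketch only shows how global pooling yields a spatially-invariant style-code and how that code modulates source identity features through AdaIN.

```python
# Minimal, illustrative PyTorch sketch of style-code controlled AdaIN fusion.
# Module names and dimensions are assumptions, not the SCGAN release.
import torch
import torch.nn as nn


class StyleEncoder(nn.Module):
    """Maps a reference face part to a spatially-invariant style-code in W."""

    def __init__(self, in_ch=3, w_dim=192):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.fc = nn.Linear(128, w_dim)

    def forward(self, ref_part):
        h = self.conv(ref_part)
        h = h.mean(dim=(2, 3))  # global pooling discards spatial layout
        return self.fc(h)       # style-code w


class AdaINBlock(nn.Module):
    """Applies per-channel scale/shift predicted from the style-code."""

    def __init__(self, ch, w_dim=192):
        super().__init__()
        self.norm = nn.InstanceNorm2d(ch, affine=False)
        self.affine = nn.Linear(w_dim, 2 * ch)

    def forward(self, feat, w):
        gamma, beta = self.affine(w).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(feat) + beta


# Toy usage: fuse source identity features with a reference style-code.
enc = StyleEncoder()
fuse = AdaINBlock(ch=256)
w = enc(torch.randn(1, 3, 64, 64))           # reference part (e.g., lip region)
identity_feat = torch.randn(1, 256, 32, 32)  # source identity features
out = fuse(identity_feat, w)
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

Because the style-code is a single vector per face component, partial editing amounts to swapping individual component codes before fusion, which is what gives the method its flexibility under spatial misalignment.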
Cite

Text

Deng et al. "Spatially-Invariant Style-Codes Controlled Makeup Transfer." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00648

Markdown

[Deng et al. "Spatially-Invariant Style-Codes Controlled Makeup Transfer." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/deng2021cvpr-spatiallyinvariant/) doi:10.1109/CVPR46437.2021.00648

BibTeX
@inproceedings{deng2021cvpr-spatiallyinvariant,
title = {{Spatially-Invariant Style-Codes Controlled Makeup Transfer}},
author = {Deng, Han and Han, Chu and Cai, Hongmin and Han, Guoqiang and He, Shengfeng},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
  pages = {6549--6557},
doi = {10.1109/CVPR46437.2021.00648},
url = {https://mlanthology.org/cvpr/2021/deng2021cvpr-spatiallyinvariant/}
}