Domain-Specific Mappings for Generative Adversarial Style Transfer
Abstract
Style transfer generates an image whose content comes from one image and whose style comes from another. Image-to-image translation approaches with disentangled representations have proven effective for style transfer between two image categories. However, previous methods often assume a shared domain-invariant content space, which can compromise the power of the content representation. To address this issue, this paper leverages domain-specific mappings that remap latent features from the shared content space to domain-specific content spaces. This way, images can be encoded more properly for style transfer. Experiments show that the proposed method outperforms previous style transfer methods, particularly in challenging scenarios that require semantic correspondences between images. Code and results are available at https://github.com/acht7111020/DSMAP.
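The pipeline the abstract describes, encoding content into a shared space, remapping it with a domain-specific mapping, and combining it with a style code, can be sketched as follows. This is a minimal numpy illustration, not the paper's architecture: the linear layers, the toy AdaIN-style combination, and all names (`encoder_content`, `domain_mapping`, `W_map_B`) are simplified assumptions standing in for the learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_content(x, W_c):
    # Shared content encoder: projects an image feature into the
    # shared domain-invariant content space (toy linear stand-in).
    return np.tanh(x @ W_c)

def domain_mapping(c_shared, W_map):
    # Domain-specific mapping: remaps the shared content code into
    # the target domain's content space (the paper's key idea,
    # here a single hypothetical linear layer).
    return np.tanh(c_shared @ W_map)

def adain(c, s):
    # Toy AdaIN-style combination: normalize the content code and
    # modulate it with the style code's statistics.
    c_norm = (c - c.mean()) / (c.std() + 1e-8)
    return c_norm * s.std() + s.mean()

dim_img, dim_c = 16, 8
W_c = rng.normal(size=(dim_img, dim_c))
W_map_B = rng.normal(size=(dim_c, dim_c))  # mapping into domain B

x_A = rng.normal(size=dim_img)  # content image feature from domain A
s_B = rng.normal(size=dim_c)    # style code from domain B

c_shared = encoder_content(x_A, W_c)         # shared content code
c_B = domain_mapping(c_shared, W_map_B)      # domain-specific content code
stylized = adain(c_B, s_B)                   # content of A in the style of B
print(stylized.shape)
```

The remapping step is what distinguishes this from a plain shared-content pipeline: the decoder for domain B receives a content code already adapted to B's content space rather than the raw shared code.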
Cite
Text
Chang et al. "Domain-Specific Mappings for Generative Adversarial Style Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58598-3_34
Markdown
[Chang et al. "Domain-Specific Mappings for Generative Adversarial Style Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chang2020eccv-domainspecific/) doi:10.1007/978-3-030-58598-3_34
BibTeX
@inproceedings{chang2020eccv-domainspecific,
title = {{Domain-Specific Mappings for Generative Adversarial Style Transfer}},
author = {Chang, Hsin-Yu and Wang, Zhixiang and Chuang, Yung-Yu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58598-3_34},
url = {https://mlanthology.org/eccv/2020/chang2020eccv-domainspecific/}
}