MIXGAN: Learning Concepts from Different Domains for Mixture Generation
Abstract
In this work, we present an interesting attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with the learned concepts. In particular, we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns the concepts of content and style from two domains respectively, and can thus join them for mixture generation in a new domain, i.e., generating images with content from one domain and style from another. MIXGAN overcomes the limitation of current GAN-based models, which either generate new images in the same domain as the one observed during training, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN as compared with related state-of-the-art GAN-based models.
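As a rough illustration of the idea described above (not the authors' implementation), the sketch below pairs a content encoder fed with images from one domain with a style decoder and discriminator trained against another domain, so generated samples carry the first domain's content and the second domain's appearance. All module names, layer sizes, and the 32x32 resolution are assumptions made for this example.

```python
# Minimal sketch of a content-from-A / style-from-B generator; illustrative only.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps a domain-A image to a content code (an assumed shared latent)."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class StyleDecoder(nn.Module):
    """Decodes a content code into an image rendered in domain B's style."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.fc = nn.Linear(code_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

class Discriminator(nn.Module):
    """Judges whether an image looks like it belongs to the style domain B."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x)

# Forward pass: take content from a domain-A image, render it in domain B's style,
# and score it with the domain-B adversary.
enc, dec, disc = ContentEncoder(), StyleDecoder(), Discriminator()
x_a = torch.randn(4, 3, 32, 32)      # stand-in for a batch from domain A
fake_b = dec(enc(x_a))               # mixture sample: A's content, B's style
adv_score = disc(fake_b)             # adversarial signal from domain B
print(fake_b.shape, adv_score.shape) # torch.Size([4, 3, 32, 32]) torch.Size([4, 1])
```

In a full training loop, the decoder and discriminator would play the usual adversarial game over domain B while the content code is constrained by domain A; the precise losses used by MIXGAN are described in the paper itself.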
Cite
Text
Hao et al. "MIXGAN: Learning Concepts from Different Domains for Mixture Generation." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/306
Markdown
[Hao et al. "MIXGAN: Learning Concepts from Different Domains for Mixture Generation." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/hao2018ijcai-mixgan/) doi:10.24963/IJCAI.2018/306
BibTeX
@inproceedings{hao2018ijcai-mixgan,
title = {{MIXGAN: Learning Concepts from Different Domains for Mixture Generation}},
author = {Hao, Guang-Yuan and Yu, Hong-Xing and Zheng, Wei-Shi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {2212--2219},
doi = {10.24963/IJCAI.2018/306},
url = {https://mlanthology.org/ijcai/2018/hao2018ijcai-mixgan/}
}