Collaging Class-Specific GANs for Semantic Image Synthesis
Abstract
We propose a new approach for high-resolution semantic image synthesis. It consists of one base image generator and multiple class-specific generators. The base generator produces high-quality images conditioned on a segmentation map. To further improve the quality of individual objects, we create a bank of Generative Adversarial Networks (GANs) by separately training class-specific models. This has several benefits, including dedicated weights for each class, centrally aligned data for each model, additional training data from other sources, the potential for higher resolution and quality, and easy manipulation of a specific object in the scene. Experiments show that our approach generates high-quality images at high resolution while offering object-level control through the class-specific generators.
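To make the collaging idea concrete, below is a minimal illustrative sketch (not the authors' code) of how per-class generator outputs could be composited onto a base image using the segmentation map as a paste mask. All names (`collage`, `class_generators`) are hypothetical placeholders, and the hard paste stands in for whatever blending the actual method uses.

```python
# Illustrative sketch: collage class-specific generator outputs onto a base
# image, using the segmentation map to select each object's pixels.
import numpy as np

def collage(base_image, seg_map, class_generators):
    """base_image: (H, W, 3) float array from the base generator.
    seg_map: (H, W) int array of class labels.
    class_generators: dict mapping class id -> callable returning an
        (H, W, 3) image for that class, aligned to the scene."""
    out = base_image.copy()
    for cls, gen in class_generators.items():
        mask = (seg_map == cls)       # pixels belonging to this class
        if not mask.any():
            continue
        class_img = gen(seg_map)      # class-specific synthesis
        out[mask] = class_img[mask]   # paste only the object's region
    return out

# Toy usage: flat base image and a single "class 1" generator.
H, W = 64, 64
base = np.zeros((H, W, 3))
seg = np.zeros((H, W), dtype=int)
seg[16:48, 16:48] = 1
gens = {1: lambda s: np.ones((H, W, 3))}
result = collage(base, seg, gens)
```

In practice the per-class outputs would be blended into the scene (e.g., with soft or feathered masks) rather than hard-pasted, but the mask-driven composition shown here is the core of the collaging step.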
Cite
Text
Li et al. "Collaging Class-Specific GANs for Semantic Image Synthesis." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01415
Markdown
[Li et al. "Collaging Class-Specific GANs for Semantic Image Synthesis." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/li2021iccv-collaging/) doi:10.1109/ICCV48922.2021.01415
BibTeX
@inproceedings{li2021iccv-collaging,
title = {{Collaging Class-Specific GANs for Semantic Image Synthesis}},
author = {Li, Yuheng and Li, Yijun and Lu, Jingwan and Shechtman, Eli and Lee, Yong Jae and Singh, Krishna Kumar},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {14418-14427},
doi = {10.1109/ICCV48922.2021.01415},
url = {https://mlanthology.org/iccv/2021/li2021iccv-collaging/}
}