Generator Knows What Discriminator Should Learn in Unconditional GANs

Abstract

Recent methods for conditional image generation benefit from dense supervision such as segmentation label maps to achieve high fidelity. However, dense supervision has rarely been explored for unconditional image generation. Here we study the efficacy of dense supervision in unconditional generation and find that generator feature maps can serve as an alternative to costly semantic label maps. Based on this empirical evidence, we propose a new generator-guided discriminator regularization (GGDR), in which generator feature maps supervise the discriminator to learn rich semantic representations in unconditional generation. Specifically, we employ a U-Net architecture for the discriminator, which is trained to predict the generator feature maps given fake images as inputs. Extensive experiments on multiple datasets show that GGDR consistently improves baseline methods both quantitatively and qualitatively. Code is available at https://github.com/naver-ai/GGDR.
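The sketch below illustrates the regularization described in the abstract: the U-Net discriminator's decoder output is pushed toward an intermediate generator feature map for fake images. This is a minimal, hypothetical rendering, not the authors' implementation; the function and variable names, the use of a negative cosine-similarity objective, and the training-loop wiring are all assumptions chosen for illustration.

```python
# Minimal sketch of the GGDR idea (illustrative, not the official code).
# Assumption: the discriminator exposes a U-Net decoder output `d_decoder_out`
# that is compared with a detached generator feature map `g_feat` using a
# negative cosine-similarity loss; names and the exact loss are hypothetical.
import torch
import torch.nn.functional as F


def ggdr_loss(d_decoder_out: torch.Tensor, g_feat: torch.Tensor) -> torch.Tensor:
    """Guide the discriminator decoder to reproduce generator features.

    d_decoder_out: (N, C, H, W) features predicted by the U-Net discriminator decoder.
    g_feat:        (N, C, H, W) generator feature map at the matching resolution.
    """
    g_feat = g_feat.detach()  # treat generator features as a fixed target
    # Channel-wise cosine similarity at each spatial location, averaged over the batch.
    sim = F.cosine_similarity(d_decoder_out, g_feat, dim=1)  # (N, H, W)
    return -sim.mean()


# Hypothetical usage inside a discriminator training step:
#   fake, g_feat = generator(z, return_feature=True)
#   logits, d_decoder_out = discriminator(fake, return_decoder=True)
#   d_loss = adversarial_d_loss(logits) + lambda_ggdr * ggdr_loss(d_decoder_out, g_feat)
```

In this reading, the regularization term only updates the discriminator (the target features are detached), so the generator is trained with its usual adversarial objective while its own features act as free dense supervision.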

Cite

Text

Lee et al. "Generator Knows What Discriminator Should Learn in Unconditional GANs." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19790-1_25

Markdown

[Lee et al. "Generator Knows What Discriminator Should Learn in Unconditional GANs." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/lee2022eccv-generator/) doi:10.1007/978-3-031-19790-1_25

BibTeX

@inproceedings{lee2022eccv-generator,
  title     = {{Generator Knows What Discriminator Should Learn in Unconditional GANs}},
  author    = {Lee, Gayoung and Kim, Hyunsu and Kim, Junho and Kim, Seonghyeon and Ha, Jung-Woo and Choi, Yunjey},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19790-1_25},
  url       = {https://mlanthology.org/eccv/2022/lee2022eccv-generator/}
}