Instance-Conditioned GAN
Abstract
Generative Adversarial Networks (GANs) can generate near photo-realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet, while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.
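The core idea described in the abstract, conditioning both generator and discriminator on an instance feature vector and treating each instance's nearest neighbors as the "real" samples for that conditioning, can be illustrated with a short sketch. This is a minimal toy example under stated assumptions, not the paper's implementation: it uses random vectors in place of images and pretrained encoder features, small MLPs for the networks, and illustrative names (G, D, k, features, neighbors) that are not taken from the official code at the repository linked above.

```python
# Minimal sketch of instance-conditioned neighborhood training (toy data,
# illustrative names; the paper uses pretrained feature extractors and
# large-scale GAN backbones such as BigGAN).
import torch
import torch.nn as nn

torch.manual_seed(0)
n, feat_dim, noise_dim, img_dim, k, batch = 256, 16, 8, 32, 5, 32

# Instance features h_i (random here; in practice, embeddings of each image).
features = torch.randn(n, feat_dim)
images = torch.randn(n, img_dim)  # stand-in for flattened training images

# Partition the data manifold into overlapping neighborhoods: for each
# instance, keep the indices of its k nearest neighbors in feature space.
dists = torch.cdist(features, features)
neighbors = dists.topk(k, largest=False).indices  # shape (n, k), includes self

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim + feat_dim, 64),
                                 nn.ReLU(), nn.Linear(64, img_dim))
    def forward(self, z, h):
        return self.net(torch.cat([z, h], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim + feat_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x, h):
        return self.net(torch.cat([x, h], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Sample conditioning instances; "real" samples are random members
    # of each instance's neighborhood.
    idx = torch.randint(n, (batch,))
    h = features[idx]
    nbr = neighbors[idx, torch.randint(k, (batch,))]
    x_real = images[nbr]
    z = torch.randn(batch, noise_dim)
    x_fake = G(z, h)

    # Discriminator: neighbors vs. generated samples, both conditioned on h_i.
    d_loss = bce(D(x_real, h), torch.ones(batch, 1)) + \
             bce(D(x_fake.detach(), h), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator for the same conditioning instances.
    g_loss = bce(D(x_fake, h), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At sampling time, the same mechanism supports the transfer behavior the abstract mentions: swapping in feature vectors from unseen images changes the neighborhood being modeled without retraining the generator.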
Cite
Text
Casanova et al. "Instance-Conditioned GAN." Neural Information Processing Systems, 2021.
Markdown
[Casanova et al. "Instance-Conditioned GAN." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/casanova2021neurips-instanceconditioned/)
BibTeX
@inproceedings{casanova2021neurips-instanceconditioned,
title = {{Instance-Conditioned GAN}},
author = {Casanova, Arantxa and Careil, Marlene and Verbeek, Jakob J. and Drozdzal, Michal and Soriano, Adriana Romero},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/casanova2021neurips-instanceconditioned/}
}