DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data

Abstract

Generative adversarial nets (GANs) have been remarkably successful at learning to sample from distributions specified by a given dataset, particularly if the given dataset is reasonably large compared to its dimensionality. However, given limited data, classical GANs have struggled, and strategies such as output regularization, data augmentation, the use of pre-trained models, and pruning have been shown to lead to improvements. Notably, the applicability of these strategies is often constrained to particular settings, e.g., the availability of a pre-trained GAN, or they increase training time, e.g., in the case of pruning. In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN. DigGAN augments existing GANs by encouraging the norm of the gradient of the discriminator's prediction w.r.t. real images and the norm w.r.t. generated samples to be close, i.e., by narrowing the gap between the two. We observe that this formulation avoids bad attractors within the GAN loss landscape, and we find that DigGAN significantly improves GAN training when only limited data is available.
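To make the regularizer concrete, here is a minimal PyTorch sketch of one plausible reading of the penalty described in the abstract: the squared difference between the average norm of the discriminator's input gradient on a real batch and on a generated batch. The function name `dig_penalty`, the weight `lam`, and the exact reduction (mean over the batch before differencing) are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def dig_penalty(discriminator, real, fake):
    # Hypothetical sketch: penalize the gap between the average norm of
    # grad_x D(x) on real images and on generated samples. The exact
    # reduction is an assumption, not the paper's reference code.
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)

    # Summing the outputs lets a single backward pass recover the
    # per-sample input gradients.
    grad_real, = torch.autograd.grad(
        discriminator(real).sum(), real, create_graph=True)
    grad_fake, = torch.autograd.grad(
        discriminator(fake).sum(), fake, create_graph=True)

    # Per-sample L2 norms, averaged over the batch.
    norm_real = grad_real.flatten(1).norm(2, dim=1).mean()
    norm_fake = grad_fake.flatten(1).norm(2, dim=1).mean()

    # Squared gap between the two average gradient norms.
    return (norm_real - norm_fake) ** 2
```

In use, the penalty would simply be added to the usual discriminator loss, e.g., `d_loss = gan_loss + lam * dig_penalty(D, real_batch, fake_batch)`, where `lam` is a tuning hyperparameter; `create_graph=True` keeps the gradient computation differentiable so the penalty itself can be backpropagated through the discriminator.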

Cite

Text

Fang et al. "DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data." Neural Information Processing Systems, 2022.

Markdown

[Fang et al. "DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/fang2022neurips-diggan/)

BibTeX

@inproceedings{fang2022neurips-diggan,
  title     = {{DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data}},
  author    = {Fang, Tiantian and Sun, Ruoyu and Schwing, Alex},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/fang2022neurips-diggan/}
}