A Large-Scale Study on Regularization and Normalization in GANs

Abstract

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they have been successfully applied to many problems, training a GAN is a notoriously challenging task that requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial number of "tricks". The success in many practical applications, coupled with the lack of a measure to quantify the failure modes of GANs, has resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We discuss and evaluate common pitfalls and reproducibility issues, open-source our code on GitHub, and provide pre-trained models on TensorFlow Hub.
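Since the abstract mentions pre-trained models released on TensorFlow Hub, the following is a minimal sketch of how such a generator could be sampled with the TF 1.x Hub API. The module handle, signature name, and latent dimension below are illustrative assumptions, not confirmed values from the paper; consult the authors' GitHub repository and tfhub.dev for the exact compare_gan module handles.

import tensorflow as tf  # TF 1.x API, contemporary with the paper's release
import tensorflow_hub as hub

with tf.Graph().as_default():
    # Hypothetical module handle; the released modules are published under
    # the google/compare_gan namespace on TensorFlow Hub.
    gan = hub.Module("https://tfhub.dev/google/compare_gan/model_1_celebahq128_resnet19/1")
    z = tf.random.normal([4, 128])          # batch of latent vectors (dim 128 assumed)
    images = gan(z, signature="generator")  # assumed generator signature name

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.tables_initializer())
        samples = sess.run(images)          # e.g. (4, 128, 128, 3) image batch

Running this would fetch the module once, cache it locally, and produce a batch of generated samples; the only model-specific pieces are the handle, the latent dimension, and the signature name.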

Cite

Text

Kurach et al. "A Large-Scale Study on Regularization and Normalization in GANs." International Conference on Machine Learning, 2019.

Markdown

[Kurach et al. "A Large-Scale Study on Regularization and Normalization in GANs." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/kurach2019icml-largescale/)

BibTeX

@inproceedings{kurach2019icml-largescale,
  title     = {{A Large-Scale Study on Regularization and Normalization in GANs}},
  author    = {Kurach, Karol and Lučić, Mario and Zhai, Xiaohua and Michalski, Marcin and Gelly, Sylvain},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {3581--3590},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/kurach2019icml-largescale/}
}