Are GANs Created Equal? A Large-Scale Study
Abstract
Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithms perform better than others. We conduct a neutral, multi-faceted large-scale empirical study of state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise more from a higher computational budget and more extensive tuning than from fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the non-saturating GAN introduced by Goodfellow et al. (2014).
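The idea behind the proposed precision/recall datasets is that when membership in the true data manifold can be tested directly, precision (do generated samples lie on the manifold?) and recall (does the generator cover the whole manifold?) become measurable. The following is an illustrative sketch of that idea, not the paper's exact protocol: it uses a toy unit-circle "manifold", a hypothetical `precision_recall` helper based on nearest-neighbor distances, and a tolerance `tol` chosen for illustration.

```python
# Illustrative sketch (assumptions: toy unit-circle manifold, distance
# tolerance tol=0.1; not the paper's exact evaluation protocol).
import numpy as np

def precision_recall(real, fake, tol=0.1):
    """Precision: fraction of fake samples within tol of some real sample.
    Recall: fraction of real samples within tol of some fake sample."""
    # Pairwise Euclidean distances, shape (n_fake, n_real).
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)
    precision = float(np.mean(d.min(axis=1) <= tol))
    recall = float(np.mean(d.min(axis=0) <= tol))
    return precision, recall

rng = np.random.default_rng(0)
# "Real" data: 500 points on the full unit circle.
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
real = np.stack([np.cos(theta), np.sin(theta)], axis=1)
# A mode-dropping "generator": samples only half of the circle.
phi = rng.uniform(0.0, np.pi, 500)
fake = np.stack([np.cos(phi), np.sin(phi)], axis=1)

p, r = precision_recall(real, fake)
# Expected behavior: high precision (all fake points lie on the manifold)
# but low recall (half of the circle is never generated).
```

A mode-dropping generator like this one can still score well on single-number metrics, which is exactly the failure mode that separating precision from recall exposes.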
Cite
Text
Lucic et al. "Are GANs Created Equal? A Large-Scale Study." Neural Information Processing Systems, 2018.
Markdown
[Lucic et al. "Are GANs Created Equal? A Large-Scale Study." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/lucic2018neurips-gans/)
BibTeX
@inproceedings{lucic2018neurips-gans,
title = {{Are GANs Created Equal? A Large-Scale Study}},
author = {Lucic, Mario and Kurach, Karol and Michalski, Marcin and Gelly, Sylvain and Bousquet, Olivier},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {700-709},
url = {https://mlanthology.org/neurips/2018/lucic2018neurips-gans/}
}