Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect

Abstract

Despite being impactful on a variety of problems and applications, generative adversarial nets (GANs) are remarkably difficult to train. This issue is formally analyzed by Arjovsky and Bottou (2017), who also propose an alternative direction to avoid the caveats in the minimax two-player training of GANs. The corresponding algorithm, namely Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning approaches. As a result, it gives rise not only to more photo-realistic samples than previous methods but also to state-of-the-art semi-supervised learning results. In particular, to the best of our knowledge, our approach achieves an inception score of more than 5.0 with only 1,000 CIFAR10 images and is the first to exceed 90% accuracy on the CIFAR10 dataset using only 4,000 labeled images.
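
To make the abstract's idea concrete, below is a minimal, hypothetical Python (PyTorch) sketch of a consistency-term penalty: two stochastic forward passes of the same real samples through a dropout-equipped critic are penalized when their outputs diverge beyond a margin. All names here (Critic, consistency_term, m_prime, lambda_ct) are illustrative, not from the authors' code; the published method also involves the critic's second-to-last layer and is combined with the WGAN-GP gradient penalty, both omitted here for brevity.

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Toy critic; dropout makes two passes over the same input differ,
    which the consistency term then penalizes."""
    def __init__(self, dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Dropout(p=0.5),  # stochasticity needed for the consistency term
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

def consistency_term(critic, real_batch, m_prime=0.0):
    # Two stochastic passes over the *same* real samples
    # (critic must be in train() mode so dropout is active).
    d1 = critic(real_batch)
    d2 = critic(real_batch)
    # Penalize squared output discrepancies exceeding the margin m_prime.
    return torch.clamp((d1 - d2).pow(2).mean(dim=1) - m_prime, min=0).mean()

# Example critic update combining the Wasserstein loss with the sketched term;
# real/fake are stand-ins for data and generator batches, and lambda_ct is an
# illustrative weight, not a value taken from the paper.
critic = Critic()
real = torch.randn(64, 784)
fake = torch.randn(64, 784)
lambda_ct = 2.0
loss = critic(fake).mean() - critic(real).mean() \
       + lambda_ct * consistency_term(critic, real)
loss.backward()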

Cite

Text

Wei et al. "Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect." International Conference on Learning Representations, 2018.

Markdown

[Wei et al. "Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/wei2018iclr-improving/)

BibTeX

@inproceedings{wei2018iclr-improving,
  title     = {{Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect}},
  author    = {Wei, Xiang and Gong, Boqing and Liu, Zixia and Lu, Wei and Wang, Liqiang},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/wei2018iclr-improving/}
}