Wasserstein-Bounded Generative Adversarial Networks

Abstract

In the field of Generative Adversarial Networks (GANs), designing a stable training strategy remains an open problem. Wasserstein GANs have substantially improved training stability over the original GANs by introducing the Wasserstein distance, but they still remain unstable and are prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference between distributions that have almost no intersection. Experiments demonstrate that WBGAN can both stabilize and accelerate convergence in the training of a series of WGAN-based variants.
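To make the core idea concrete, here is a toy sketch of bounding a Wasserstein term. The helper below computes the empirical 1-Wasserstein distance between two equal-size 1-D samples (the mean absolute difference of their sorted values) and then caps it at an upper bound. This is only an illustration of the general idea of an upper-bound constraint; the function names are hypothetical and the paper's actual formulation for high-dimensional GAN training may differ.

```python
def wasserstein_1d(xs, ys):
    # Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    # for sorted samples, it equals the mean absolute difference.
    assert len(xs) == len(ys), "samples must have equal size"
    n = len(xs)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / n

def bounded_wasserstein_1d(xs, ys, bound):
    # Hypothetical illustration of an upper-bound constraint:
    # the Wasserstein term is capped at `bound`, so very large distance
    # estimates cannot dominate the loss.
    return min(wasserstein_1d(xs, ys), bound)

print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))          # 1.0
print(bounded_wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], 0.5))  # 0.5
```

In a GAN setting, the unbounded estimate would come from the critic rather than from sorted samples, but the capping step plays the same role: it keeps the Wasserstein term from growing without limit during training.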

Cite

Text

Zhou et al. "Wasserstein-Bounded Generative Adversarial Networks." International Conference on Learning Representations, 2020.

Markdown

[Zhou et al. "Wasserstein-Bounded Generative Adversarial Networks." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zhou2020iclr-wassersteinbounded/)

BibTeX

@inproceedings{zhou2020iclr-wassersteinbounded,
  title     = {{Wasserstein-Bounded Generative Adversarial Networks}},
  author    = {Zhou, Peng and Ni, Bingbing and Xie, Lingxi and Zhang, Xiaopeng and Wang, Hang and Geng, Cong and Tian, Qi},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/zhou2020iclr-wassersteinbounded/}
}