Wasserstein Divergence for GANs

Abstract

In many domains of computer vision, generative adversarial networks (GANs) have achieved great success, among which the family of Wasserstein GANs (WGANs) is considered state-of-the-art owing to its theoretical contributions and competitive qualitative performance. However, it is very challenging to enforce the k-Lipschitz constraint required by the Wasserstein-1 metric (W-met). In this paper, we propose a novel Wasserstein divergence (W-div), a relaxed version of W-met that does not require the k-Lipschitz constraint. As a concrete application, we introduce a Wasserstein divergence objective for GANs (WGAN-div), which faithfully approximates W-div through optimization. Under various settings, including progressive growing training, we demonstrate the stability of the proposed WGAN-div owing to its theoretical and practical advantages over WGANs. We also study the quantitative and visual performance of WGAN-div on standard image synthesis benchmarks, showing its superior performance compared to state-of-the-art methods.
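
As a rough illustration of the objective described above, below is a minimal PyTorch sketch of a WGAN-div critic loss (not the authors' code). It assumes the hyperparameters k = 2 and p = 6 recommended in the paper, and samples the penalty points x̂ by random interpolation between real and generated batches, one common sampling choice; `critic`, `real`, and `fake` are placeholder names.

import torch

def wgan_div_critic_loss(critic, real, fake, k=2.0, p=6.0):
    # `fake` is assumed to be detached from the generator graph.
    # Wasserstein-style term: push critic outputs apart on real vs. generated
    # samples; note that no k-Lipschitz constraint is imposed on the critic.
    loss = critic(fake).mean() - critic(real).mean()

    # Sample penalty points x_hat along straight lines between real and fake
    # samples (one common choice for the sampling distribution).
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

    # W-div penalty: drive ||grad f(x_hat)||^p itself toward zero, rather
    # than toward 1 as in the WGAN-GP gradient penalty.
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return loss + k * grad_norm.pow(p).mean()

A generator step would then minimize -critic(fake).mean(), as in other WGAN variants; the key difference from WGAN-GP is that the penalty term approximates the W-div directly instead of enforcing a Lipschitz constraint.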

Cite

Text

Wu et al. "Wasserstein Divergence for GANs." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01228-1_40

Markdown

[Wu et al. "Wasserstein Divergence for GANs." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/wu2018eccv-wasserstein/) doi:10.1007/978-3-030-01228-1_40

BibTeX

@inproceedings{wu2018eccv-wasserstein,
  title     = {{Wasserstein Divergence for GANs}},
  author    = {Wu, Jiqing and Huang, Zhiwu and Thoma, Janine and Acharya, Dinesh and Van Gool, Luc},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01228-1_40},
  url       = {https://mlanthology.org/eccv/2018/wu2018eccv-wasserstein/}
}