SGD Learns One-Layer Networks in WGANs

Abstract

Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
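To make the training dynamics the abstract refers to concrete, here is a minimal sketch of stochastic gradient descent-ascent on a WGAN objective with a one-layer generator. This is an illustrative toy, not the paper's exact construction: the ReLU activation, the linear discriminator with a norm constraint, and all dimensions and step sizes below are assumptions for the sketch (the paper's analysis pairs specific activation classes with specific discriminator classes).

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10, 5  # data dimension, latent dimension (illustrative choices)
W_star = rng.normal(size=(d, k)) / np.sqrt(k)  # ground-truth generator weights

def generator(W, z):
    """One-layer generator: x = ReLU(W z)."""
    return np.maximum(W @ z, 0.0)

def project_unit_ball(v):
    """Keep the linear discriminator D(x) = v^T x 1-Lipschitz: ||v|| <= 1."""
    n = np.linalg.norm(v)
    return v / n if n > 1.0 else v

W = rng.normal(size=(d, k)) / np.sqrt(k)  # learner's generator weights
v = np.zeros(d)                           # discriminator parameters

eta_w, eta_v = 0.05, 0.05  # step sizes (assumed, not from the paper)
for step in range(20000):
    z_real, z_fake = rng.normal(size=k), rng.normal(size=k)
    x_real = generator(W_star, z_real)  # one fresh sample from the target
    pre = W @ z_fake
    x_fake = np.maximum(pre, 0.0)
    # Stochastic minmax objective: f(W, v) = v^T x_real - v^T ReLU(W z_fake).
    # Ascent step on the discriminator parameters v:
    v = project_unit_ball(v + eta_v * (x_real - x_fake))
    # Descent step on the generator weights W (subgradient through ReLU):
    grad_W = -np.outer(v * (pre > 0), z_fake)
    W = W - eta_w * grad_W

# Rough coupled-sample diagnostic (shares z between the two generators);
# this is not a true Wasserstein distance, only a convergence sanity check.
zs = rng.normal(size=(1000, k))
err = np.mean([np.linalg.norm(generator(W_star, z) - generator(W, z)) for z in zs])
print(f"mean coupled-sample error after training: {err:.4f}")
```

The sketch uses simultaneous descent-ascent with fresh samples at every step; in practice WGAN training often takes several discriminator ascent steps per generator step, but the single-step schedule matches the stochastic gradient descent-ascent studied here.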

Cite

Text

Lei et al. "SGD Learns One-Layer Networks in WGANs." International Conference on Machine Learning, 2020.

Markdown

[Lei et al. "SGD Learns One-Layer Networks in WGANs." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/lei2020icml-sgd/)

BibTeX

@inproceedings{lei2020icml-sgd,
  title     = {{SGD Learns One-Layer Networks in WGANs}},
  author    = {Lei, Qi and Lee, Jason and Dimakis, Alex and Daskalakis, Constantinos},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {5799--5808},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/lei2020icml-sgd/}
}