An Error Analysis of Generative Adversarial Networks for Learning Distributions

Abstract

This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through H\"older classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions that have low-dimensional structures or H\"older densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation error and the statistical error, which may be of independent interest.
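The integral probability metrics the abstract refers to have a standard form; as a sketch (the notation here is a common convention, not quoted from the paper):

```latex
% Integral probability metric (IPM) indexed by a function class F:
%   d_F(mu, nu) = sup over f in F of | E_mu[f] - E_nu[f] |
\[
  d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \left| \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \right|.
\]
% Taking F to be a Hoelder class H^beta recovers a family of metrics;
% for beta = 1 (Lipschitz functions with constant 1), d_F is the
% Wasserstein-1 distance, by Kantorovich-Rubinstein duality.
```

Choosing the discriminator class $\mathcal{F}$ as a H\"older ball thus interpolates a family of metrics, with the Wasserstein-1 distance arising as the Lipschitz special case mentioned in the abstract.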

Cite

Text

Huang et al. "An Error Analysis of Generative Adversarial Networks for Learning Distributions." Journal of Machine Learning Research, 2022.

Markdown

[Huang et al. "An Error Analysis of Generative Adversarial Networks for Learning Distributions." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/huang2022jmlr-error/)

BibTeX

@article{huang2022jmlr-error,
  title     = {{An Error Analysis of Generative Adversarial Networks for Learning Distributions}},
  author    = {Huang, Jian and Jiao, Yuling and Li, Zhen and Liu, Shiao and Wang, Yang and Yang, Yunfei},
  journal   = {Journal of Machine Learning Research},
  year      = {2022},
  pages     = {1--43},
  volume    = {23},
  url       = {https://mlanthology.org/jmlr/2022/huang2022jmlr-error/}
}