Rates of Convergence for Density Estimation with Generative Adversarial Networks
Abstract
In this work we undertake a thorough study of the non-asymptotic properties of vanilla generative adversarial networks (GANs). We prove an oracle inequality for the Jensen-Shannon (JS) divergence between the underlying density $\mathsf{p}^*$ and the GAN estimate, with a significantly better statistical error term than in previously known results. The advantage of our bound becomes clear in application to nonparametric density estimation. We show that the JS-divergence between the GAN estimate and $\mathsf{p}^*$ decays as fast as $(\log{n}/n)^{2\beta/(2\beta + d)}$, where $n$ is the sample size and $\beta$ determines the smoothness of $\mathsf{p}^*$. This rate of convergence coincides (up to logarithmic factors) with the minimax optimal rate for the considered class of densities.
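As a schematic restatement of the rate claimed above (the notation $\widehat{\mathsf{p}}_n$ for the GAN estimate, the reading of $d$ as the ambient dimension, and the use of $\lesssim$ to hide constants and logarithmic factors are ours, not necessarily the paper's exact theorem statement), the result says

$$\mathrm{JS}\bigl(\widehat{\mathsf{p}}_n, \mathsf{p}^*\bigr) \;\lesssim\; \left(\frac{\log n}{n}\right)^{2\beta/(2\beta + d)},$$

where $\widehat{\mathsf{p}}_n$ denotes the GAN density estimate built from $n$ samples of a $\beta$-smooth density $\mathsf{p}^*$ on a $d$-dimensional domain.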
Cite
Text
Puchkin et al. "Rates of Convergence for Density Estimation with Generative Adversarial Networks." Journal of Machine Learning Research, 2024.
Markdown
[Puchkin et al. "Rates of Convergence for Density Estimation with Generative Adversarial Networks." Journal of Machine Learning Research, 2024.](https://mlanthology.org/jmlr/2024/puchkin2024jmlr-rates/)
BibTeX
@article{puchkin2024jmlr-rates,
title = {{Rates of Convergence for Density Estimation with Generative Adversarial Networks}},
author = {Puchkin, Nikita and Samsonov, Sergey and Belomestny, Denis and Moulines, Eric and Naumov, Alexey},
journal = {Journal of Machine Learning Research},
year = {2024},
pages = {1--47},
volume = {25},
url = {https://mlanthology.org/jmlr/2024/puchkin2024jmlr-rates/}
}