LSGANs with Gradient Regularizers Are Smooth High-Dimensional Interpolators
Abstract
We consider the problem of discriminator optimization in least-squares generative adversarial networks (LSGANs) subject to higher-order gradient regularization enforced on the convex hull of all possible interpolation points between the target (real) and generated (fake) data. We analyze the proposed LSGAN cost within a variational framework and show that the optimal discriminator solves a regularized least-squares problem and can be represented by a polyharmonic radial basis function (RBF) interpolator. The optimal RBF discriminator admits a closed-form implementation, with the weights computed by solving a linear system of equations. We validate the proposed approach on synthetic Gaussian data and standard image datasets. On 2-D Gaussian data, replacing the trainable discriminator network with the closed-form RBF interpolator yields superior convergence and overcomes common pitfalls in GAN training, namely mode dropping and mode collapse. On images, however, the inherent low-dimensional manifold structure of the data makes the implementation of the optimal discriminator ill-posed.
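To make the closed-form construction concrete, the following is a minimal sketch (not the authors' implementation) of a polyharmonic RBF interpolator whose weights are obtained by solving a linear system, applied to a toy discriminator task with real samples labeled +1 and fake samples labeled -1. It uses the linear polyharmonic kernel φ(r) = r and omits the polynomial tail that full polyharmonic splines carry; the kernel order, target labels, and function names here are illustrative choices, not details from the paper.

```python
import numpy as np

def polyharmonic_kernel(r, k=1):
    # phi(r) = r^k for odd k; k = 1 gives the linear RBF, whose
    # interpolation matrix is nonsingular for distinct centers.
    return r ** k

def fit_rbf(centers, values, k=1):
    # Pairwise distances between interpolation centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = polyharmonic_kernel(d, k)   # interpolation matrix A_ij = phi(||x_i - x_j||)
    w = np.linalg.solve(A, values)  # closed-form weights via a linear solve
    return w

def rbf_eval(x, centers, w, k=1):
    # Evaluate the interpolator f(x) = sum_j w_j * phi(||x - x_j||).
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return polyharmonic_kernel(d, k) @ w

# Toy example: interpolate discriminator targets (+1 real, -1 fake)
# between two 2-D Gaussian clusters.
rng = np.random.default_rng(0)
reals = rng.normal(1.0, 0.1, size=(5, 2))
fakes = rng.normal(-1.0, 0.1, size=(5, 2))
X = np.vstack([reals, fakes])
y = np.concatenate([np.ones(5), -np.ones(5)])
w = fit_rbf(X, y)
# By construction, the interpolator reproduces the targets at the centers.
print(np.allclose(rbf_eval(X, X, w), y))
```

In a GAN training loop, this solve would replace the discriminator's gradient updates at each step, with the real and generated minibatches serving as the interpolation centers.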
Cite
Asokan and Seelamantula. "LSGANs with Gradient Regularizers Are Smooth High-Dimensional Interpolators." NeurIPS 2022 Workshops: INTERPOLATE, 2022.
BibTeX
@inproceedings{asokan2022neuripsw-lsgans,
title = {{LSGANs with Gradient Regularizers Are Smooth High-Dimensional Interpolators}},
author = {Asokan, Siddarth and Seelamantula, Chandra Sekhar},
booktitle = {NeurIPS 2022 Workshops: INTERPOLATE},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/asokan2022neuripsw-lsgans/}
}