ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks
Abstract
We consider Wasserstein generative adversarial networks (WGAN) with a gradient-norm penalty and analyze the underlying functional optimization problem within a variational setting. The optimal discriminator in this setting is the solution to a Poisson differential equation and can be obtained in closed form, without having to train a neural network. We illustrate this by employing a Fourier-series approximation to solve the Poisson differential equation. Experimental results on synthesized low-dimensional Gaussian data demonstrate superior convergence behavior of the proposed approach compared with baseline WGAN variants that employ weight clipping, gradient penalties, or Lipschitz penalties on the discriminator. Further, within this setting, the optimal Lagrange multiplier can be computed in closed form and serves as a proxy for measuring the convergence of the GAN generator. This work is an extended abstract summarizing Asokan & Seelamantula (2023).
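For intuition, here is a minimal sketch of the Euler-Lagrange calculation in our own notation; the penalty weight \(\lambda\), the sign conventions, and the Fourier normalization are illustrative assumptions, not reproduced from the paper. Writing a penalized WGAN discriminator objective with data and generator densities \(p_d\) and \(p_g\),
\[
  \max_{D}\; \int D(x)\,\big(p_d(x) - p_g(x)\big)\,\mathrm{d}x \;-\; \lambda \int \|\nabla D(x)\|^2\,\mathrm{d}x,
\]
the first-order (Euler-Lagrange) optimality condition is a Poisson equation,
\[
  2\lambda\,\nabla^2 D^*(x) \;=\; p_g(x) - p_d(x),
\]
which a Fourier series diagonalizes: with the convention \(\widehat{f}(k)=\int f(x)\,e^{-2\pi i k\cdot x}\,\mathrm{d}x\), each nonzero frequency satisfies
\[
  \widehat{D^*}(k) \;=\; \frac{\widehat{p_d}(k) - \widehat{p_g}(k)}{2\lambda\,\|2\pi k\|^2},
\]
so the optimal discriminator is available in closed form once the (empirical) Fourier coefficients of \(p_d\) and \(p_g\) are estimated. The \(k=0\) mode is unconstrained, since both densities integrate to one.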
Cite
Text
Asokan and Seelamantula. "ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks." NeurIPS 2023 Workshops: DLDE, 2023.
Markdown
[Asokan and Seelamantula. "ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks." NeurIPS 2023 Workshops: DLDE, 2023.](https://mlanthology.org/neuripsw/2023/asokan2023neuripsw-elegant/)
BibTeX
@inproceedings{asokan2023neuripsw-elegant,
title = {{ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks}},
author = {Asokan, Siddarth and Seelamantula, Chandra Sekhar},
booktitle = {NeurIPS 2023 Workshops: DLDE},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/asokan2023neuripsw-elegant/}
}