How Does Lipschitz Regularization Influence GAN Training?

Abstract

Despite the success of Lipschitz regularization in stabilizing GAN training, the exact reason for its effectiveness remains poorly understood. The direct effect of $K$-Lipschitz regularization is to restrict the $L_2$-norm of the neural network gradient to be smaller than a threshold $K$ (e.g., $K=1$) such that $\|\nabla f\| \leq K$. In this work, we propose that, beyond this direct effect, Lipschitz regularization ensures that all loss functions effectively work in the same way. Empirically, we verify our proposition on the MNIST, CIFAR10 and CelebA datasets.
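
As context for the constraint described above: one common way to (approximately) enforce a $K$-Lipschitz discriminator is a gradient penalty that pushes the per-sample input-gradient norm toward $K$, as in WGAN-GP. The sketch below is illustrative only and assumes PyTorch; the names (discriminator, lambda_gp) are placeholders and this is not the authors' implementation, which compares several regularization schemes.

import torch

# Minimal gradient-penalty sketch (illustrative, not the paper's code):
# softly restrict the L2-norm of the discriminator's input gradient toward K.
def lipschitz_gradient_penalty(discriminator, real, fake, K=1.0, lambda_gp=10.0):
    # Random interpolation between real and fake samples (WGAN-GP style).
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)

    # Gradient of the discriminator output w.r.t. the interpolated input.
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True,
    )[0]

    # Penalize deviation of the per-sample gradient norm from the threshold K.
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - K) ** 2).mean()

In practice this penalty is added to the discriminator loss each step; it only encourages, rather than strictly guarantees, the $\|\nabla f\| \leq K$ constraint.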

Cite

Text

Qin et al. "How Does Lipschitz Regularization Influence GAN Training?" Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58517-4_19

Markdown

[Qin et al. "How Does Lipschitz Regularization Influence GAN Training?" Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/qin2020eccv-lipschitz/) doi:10.1007/978-3-030-58517-4_19

BibTeX

@inproceedings{qin2020eccv-lipschitz,
  title     = {{How Does Lipschitz Regularization Influence GAN Training?}},
  author    = {Qin, Yipeng and Mitra, Niloy and Wonka, Peter},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58517-4_19},
  url       = {https://mlanthology.org/eccv/2020/qin2020eccv-lipschitz/}
}