On the Gradient Formula for Learning Generative Models with Regularized Optimal Transport Costs

Abstract

Learning a Wasserstein Generative Adversarial Network (WGAN) requires differentiating the optimal transport cost with respect to the parameters of the generative model. In this work, we provide sufficient conditions for the existence of a gradient formula in two different frameworks: the case of semi-discrete optimal transport (i.e., with a discrete target distribution) and the case of regularized optimal transport (i.e., with an entropic penalty). In both cases, the gradient formula involves a solution of the semi-dual formulation of the optimal transport cost. Our study makes a connection between the gradient of the WGAN loss function and the Laguerre diagrams associated with semi-discrete transport maps. The learning problem is addressed with an alternating algorithm, which is not convergent in general; in most cases, however, it stabilizes close to a relevant solution of the generative learning problem. We also show that entropic regularization can speed up convergence, but it noticeably changes the shape of the learned generative model.
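
To make the abstract's statement concrete, here is a standard sketch of the semi-discrete objects it refers to, written in our own notation rather than the paper's: the target is a discrete measure \(\nu = \sum_{j=1}^{J} \nu_j \,\delta_{y_j}\), the generator \(g_\theta\) pushes forward a latent distribution \(\zeta\) so that \(\mu_\theta = (g_\theta)_{\#}\zeta\), and \(\psi \in \mathbb{R}^J\) is the semi-dual variable. The paper's precise assumptions and statement may differ.

\[
\mathrm{OT}_c(\mu_\theta, \nu)
\;=\;
\max_{\psi \in \mathbb{R}^J}
\;\sum_{j=1}^{J} \psi_j \,\nu_j
\;+\;
\int \psi^{c}(x)\, d\mu_\theta(x),
\qquad
\psi^{c}(x) \;=\; \min_{1 \le j \le J} \big( c(x, y_j) - \psi_j \big).
\]

With \(\psi^\star\) a semi-dual maximizer, the envelope theorem formally yields the gradient formula of the type discussed in the abstract:

\[
\nabla_\theta \,\mathrm{OT}_c(\mu_\theta, \nu)
\;=\;
\mathbb{E}_{z \sim \zeta}\!\left[ \partial_\theta g_\theta(z)^{\top}\, \nabla_x c\big(g_\theta(z),\, y_{j^\star(z)}\big) \right],
\qquad
j^\star(z) \;=\; \operatorname*{arg\,min}_{j}\, \big( c(g_\theta(z), y_j) - \psi^\star_j \big),
\]

where the regions \(\{x : j^\star(x) = j\}\) are the Laguerre cells mentioned in the abstract. In the entropic case, the hard minimum in \(\psi^{c}\) is replaced by a soft-minimum, \(\psi^{c,\varepsilon}(x) = -\varepsilon \log \sum_{j} \nu_j \exp\!\big((\psi_j - c(x, y_j))/\varepsilon\big)\), which smooths the cell assignment into a softmax weighting.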

Cite

Text

Houdard et al. "On the Gradient Formula for Learning Generative Models with Regularized Optimal Transport Costs." Transactions on Machine Learning Research, 2023.

Markdown

[Houdard et al. "On the Gradient Formula for Learning Generative Models with Regularized Optimal Transport Costs." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/houdard2023tmlr-gradient/)

BibTeX

@article{houdard2023tmlr-gradient,
  title     = {{On the Gradient Formula for Learning Generative Models with Regularized Optimal Transport Costs}},
  author    = {Houdard, Antoine and Leclaire, Arthur and Papadakis, Nicolas and Rabin, Julien},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/houdard2023tmlr-gradient/}
}