Learning with Minibatch Wasserstein : Asymptotic and Gradient Properties

Abstract

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large-scale datasets. To overcome this challenge, practitioners compute these distances on minibatches, i.e., they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, whose effects are not well understood so far. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators and gradients and a concentration bound around the expectation, but also with defects such as the loss of the distance property. Along with this theoretical analysis, we also conduct empirical experiments on gradient flows, GANs, and color transfer that highlight the practical interest of this strategy.
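
As a rough illustration of the minibatch strategy described in the abstract (not the authors' implementation), the sketch below averages the exact OT cost over several random minibatch pairs. It assumes the POT library (`ot.dist`, `ot.emd2`); the function name `minibatch_wasserstein` and all parameters are illustrative choices, not part of the paper.

```python
# Minimal sketch of the minibatch Wasserstein estimator: average the exact OT
# cost over several pairs of minibatches drawn from two empirical distributions.
# Requires the POT library (pip install pot).
import numpy as np
import ot  # Python Optimal Transport


def minibatch_wasserstein(xs, xt, batch_size=64, n_batches=10, seed=None):
    """Average the exact OT cost over random minibatch pairs from xs and xt."""
    rng = np.random.default_rng(seed)
    costs = []
    for _ in range(n_batches):
        # Draw one minibatch from each point cloud (uniform weights).
        bs = xs[rng.choice(len(xs), size=batch_size, replace=False)]
        bt = xt[rng.choice(len(xt), size=batch_size, replace=False)]
        M = ot.dist(bs, bt)  # pairwise squared Euclidean ground costs
        a = np.full(batch_size, 1.0 / batch_size)
        b = np.full(batch_size, 1.0 / batch_size)
        costs.append(ot.emd2(a, b, M))  # exact OT cost on this minibatch pair
    return float(np.mean(costs))


# Example: two 2-D Gaussian point clouds with shifted means.
xs = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 2))
xt = np.random.default_rng(1).normal(1.0, 1.0, size=(1000, 2))
print(minibatch_wasserstein(xs, xt, batch_size=64, n_batches=20, seed=0))
```

As discussed in the paper, this averaged minibatch quantity is an unbiased estimator of its expectation but is no longer a true distance between the full distributions.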

Cite

Text

Fatras et al. "Learning with Minibatch Wasserstein : Asymptotic and Gradient Properties." Artificial Intelligence and Statistics, 2020.

Markdown

[Fatras et al. "Learning with Minibatch Wasserstein : Asymptotic and Gradient Properties." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/fatras2020aistats-learning/)

BibTeX

@inproceedings{fatras2020aistats-learning,
  title     = {{Learning with Minibatch Wasserstein : Asymptotic and Gradient Properties}},
  author    = {Fatras, Kilian and Zine, Younes and Flamary, Rémi and Gribonval, Rémi and Courty, Nicolas},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {2131--2141},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/fatras2020aistats-learning/}
}