Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm

Abstract

We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence. Based on a Frank-Wolfe optimization strategy, our approach proceeds by populating the support of the barycenter incrementally, without requiring any pre-allocation. We consider discrete as well as continuous distributions, proving convergence rates of the proposed algorithm in both settings. Key elements of our analysis are a new result showing that the Sinkhorn divergence on compact domains has Lipschitz continuous gradient with respect to the Total Variation norm, and a characterization of the sample complexity of Sinkhorn potentials. Experiments validate the effectiveness of our method in practice.
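The divergence at the heart of the paper can be illustrated with a short numerical sketch. The code below is not the authors' implementation; it is a minimal numpy version of the (debiased) Sinkhorn divergence between two discrete measures, S_eps(a, b) = OT_eps(a, b) − ½·OT_eps(a, a) − ½·OT_eps(b, b), using plain Sinkhorn iterations with a squared-Euclidean cost. The regularization `eps` and iteration count are illustrative choices, and points are assumed to lie in a compact domain so the kernel does not underflow.

```python
import numpy as np

def sinkhorn_cost(x, a, y, b, eps=0.1, n_iter=300):
    """Entropic OT transport cost between discrete measures (x, a) and (y, b).

    x, y: support points, arrays of shape (n, d) and (m, d)
    a, b: probability weights, arrays of shape (n,) and (m,)
    eps:  entropic regularization (illustrative value, not from the paper)
    """
    # Squared-Euclidean ground cost matrix C[i, j] = ||x_i - y_j||^2
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)                     # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):                  # Sinkhorn matrix-scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]          # approximate transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(x, a, y, b, eps=0.1):
    """Debiased Sinkhorn divergence: zero when the two measures coincide."""
    return (sinkhorn_cost(x, a, y, b, eps)
            - 0.5 * sinkhorn_cost(x, a, x, a, eps)
            - 0.5 * sinkhorn_cost(y, b, y, b, eps))

if __name__ == "__main__":
    # Two small point clouds on [0, 1]: one near 0, one near 1.
    x = np.array([[0.00], [0.05], [0.10]])
    y = np.array([[0.90], [0.95], [1.00]])
    a = np.full(3, 1.0 / 3.0)
    b = np.full(3, 1.0 / 3.0)
    print(sinkhorn_divergence(x, a, x, a))   # self-divergence, ~0
    print(sinkhorn_divergence(x, a, y, b))   # positive for separated clouds
```

In the paper's Frank-Wolfe scheme, each iteration minimizes the linearization of this divergence (given by the Sinkhorn potentials) over single Dirac masses, and the minimizing point is appended to the barycenter's support; the sketch above only covers the objective being linearized.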

Cite

Text

Luise et al. "Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm." Neural Information Processing Systems, 2019.

Markdown

[Luise et al. "Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/luise2019neurips-sinkhorn/)

BibTeX

@inproceedings{luise2019neurips-sinkhorn,
  title     = {{Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm}},
  author    = {Luise, Giulia and Salzo, Saverio and Pontil, Massimiliano and Ciliberto, Carlo},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {9322--9333},
  url       = {https://mlanthology.org/neurips/2019/luise2019neurips-sinkhorn/}
}