Tree-Sliced Variants of Wasserstein Distances
Abstract
Optimal transport (\OT) theory defines a powerful set of tools to compare probability distributions. \OT~suffers, however, from a few drawbacks, computational and statistical, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the \textit{sliced} formulation, which exploits the closed-form formula between univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely \textit{tree metrics}, which also yield fast closed-form computations and are negative definite, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between these measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks.
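The closed form the abstract alludes to is well known: for two measures supported on the nodes of a weighted tree, the Wasserstein-1 distance under the tree metric is the sum, over every edge, of the edge weight times the absolute difference of the two measures' masses in the subtree hanging below that edge. The sketch below (not the authors' code; the array-based tree encoding and function name are illustrative choices) computes this quantity; on a chain it reduces to the one-dimensional Wasserstein distance used in the sliced formulation.

```python
def tree_wasserstein(parent, edge_weight, mu, nu):
    """Closed-form Wasserstein-1 distance between two measures on a tree.

    parent[i]      -- index of node i's parent (the root has parent -1)
    edge_weight[i] -- weight of the edge (i, parent[i]); unused for the root
    mu, nu         -- nonnegative node masses, each summing to the same total

    Sums w_e * |mu(subtree_e) - nu(subtree_e)| over every edge e.
    """
    n = len(parent)
    diff = [float(m) - float(v) for m, v in zip(mu, nu)]

    def depth(i):
        d = 0
        while parent[i] != -1:
            i = parent[i]
            d += 1
        return d

    cost = 0.0
    # Visit deepest nodes first so each subtree's signed mass difference
    # is fully accumulated before it is pushed onto the parent.
    for i in sorted(range(n), key=depth, reverse=True):
        p = parent[i]
        if p != -1:
            cost += edge_weight[i] * abs(diff[i])
            diff[p] += diff[i]
    return cost
```

For example, on the chain 0-1-2 with unit edge weights, moving a unit mass from node 0 to node 2 costs 2.0, matching the univariate Wasserstein distance; the tree-sliced distance then averages this quantity over several random trees.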
Cite
Text
Le et al. "Tree-Sliced Variants of Wasserstein Distances." Neural Information Processing Systems, 2019.
Markdown
[Le et al. "Tree-Sliced Variants of Wasserstein Distances." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/le2019neurips-treesliced/)
BibTeX
@inproceedings{le2019neurips-treesliced,
title = {{Tree-Sliced Variants of Wasserstein Distances}},
author = {Le, Tam and Yamada, Makoto and Fukumizu, Kenji and Cuturi, Marco},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {12304--12315},
url = {https://mlanthology.org/neurips/2019/le2019neurips-treesliced/}
}