The Shape of Data: Intrinsic Distance for Data Distributions
Abstract
The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights into neural network architectures. Existing techniques for comparing data distributions focus on global data properties such as mean and covariance; in that sense, they are extrinsic and uni-scale. We develop a first-of-its-kind intrinsic and multi-scale method for characterizing and comparing data manifolds, using a lower bound of the spectral variant of the Gromov-Wasserstein inter-manifold distance, which compares all data moments. In a thorough experimental study, we demonstrate that our method effectively discerns the structure of data manifolds even on unaligned data of different dimensionalities; moreover, we showcase its efficacy in evaluating the quality of generative models.
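The abstract describes the method only at a high level. As one illustrative reading (not the paper's implementation), the intrinsic multi-scale comparison can be sketched in a few lines of NumPy: build a k-NN graph Laplacian for each point cloud, compute heat-kernel traces over a range of diffusion times, and compare the two trace curves. The function names, the damping weight, and the exact eigendecomposition used here are simplifying assumptions for small inputs; at scale, heat traces would be estimated stochastically rather than via full eigendecomposition.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Symmetric normalized Laplacian of a symmetrized k-NN graph on points X."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    A = np.zeros_like(d)
    # nearest neighbors, skipping each point itself (distance 0)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    A = np.maximum(A, A.T)          # symmetrize the adjacency
    deg = A.sum(axis=1)
    dinv = np.where(deg > 0, deg, 1.0) ** -0.5
    # L = I - D^{-1/2} A D^{-1/2}; spectra lie in [0, 2] regardless of scale
    return np.eye(len(X)) - dinv[:, None] * A * dinv[None, :]

def heat_trace(eigenvalues, ts):
    """h(t) = sum_i exp(-t * lambda_i), one value per diffusion time t."""
    return np.exp(-np.outer(ts, eigenvalues)).sum(axis=1)

def spectral_distance(L1, L2, ts=np.logspace(-1, 1, 50)):
    """Illustrative multi-scale distance: damped sup-difference of heat traces."""
    h1 = heat_trace(np.linalg.eigvalsh(L1), ts)
    h2 = heat_trace(np.linalg.eigvalsh(L2), ts)
    w = np.exp(-2.0 * (ts + 1.0 / ts))  # damping weight (an assumption here)
    return np.max(w * np.abs(h1 - h2))
```

Because the comparison runs entirely on graph spectra, the two point clouds need no alignment and may live in spaces of different dimensionality; only the intrinsic neighborhood structure enters the distance.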
Cite
Text
Tsitsulin et al. "The Shape of Data: Intrinsic Distance for Data Distributions." International Conference on Learning Representations, 2020.

Markdown
[Tsitsulin et al. "The Shape of Data: Intrinsic Distance for Data Distributions." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/tsitsulin2020iclr-shape/)

BibTeX
@inproceedings{tsitsulin2020iclr-shape,
  title     = {{The Shape of Data: Intrinsic Distance for Data Distributions}},
  author    = {Tsitsulin, Anton and Munkhoeva, Marina and Mottin, Davide and Karras, Panagiotis and Bronstein, Alex and Oseledets, Ivan and Müller, Emmanuel},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/tsitsulin2020iclr-shape/}
}