Sliced Wasserstein Auto-Encoders

Abstract

In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), which enable one to shape the distribution of the latent space into any samplable probability distribution without training an adversarial network or specifying a likelihood function. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide extensive error analysis for our algorithm and show its merits on three benchmark datasets.
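The regularizer described in the abstract is simple enough to sketch. Below is a minimal NumPy illustration (not the authors' released code) of a Monte Carlo estimate of the sliced-Wasserstein distance between an encoded mini-batch and samples drawn from the prior; the function name, its arguments, and the default `num_projections` are ours for illustration.

```python
import numpy as np

def sliced_wasserstein(encoded, prior_samples, num_projections=50, p=2, rng=None):
    """Monte Carlo estimate of the (p-th power of the) sliced-Wasserstein
    distance between two empirical distributions, each given as an (n, d)
    array of samples. Both arrays must hold the same number of samples."""
    rng = np.random.default_rng() if rng is None else rng
    d = encoded.shape[1]
    # Draw random projection directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: one 1-D problem per slice.
    proj_x = encoded @ theta.T        # shape (n, num_projections)
    proj_y = prior_samples @ theta.T  # shape (n, num_projections)
    # In 1-D, optimal transport between equal-size empirical measures
    # reduces to matching sorted samples.
    proj_x.sort(axis=0)
    proj_y.sort(axis=0)
    return np.mean(np.abs(proj_x - proj_y) ** p)

# Example: penalize deviation of an encoded batch from a standard normal prior.
z = np.random.default_rng(0).standard_normal((128, 8))      # stand-in for encoder output
prior = np.random.default_rng(1).standard_normal((128, 8))  # samples from the prior
penalty = sliced_wasserstein(z, prior)
```

In a sketch of this kind, the training objective would combine a reconstruction term with this penalty, e.g. `loss = mse(x, decoder(z)) + lam * sliced_wasserstein(z, prior_samples)`, where `lam` is a hypothetical trade-off weight; because each slice is a one-dimensional transport problem solved by sorting, no adversarial network or closed-form likelihood is needed.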

Cite

Text

Kolouri et al. "Sliced Wasserstein Auto-Encoders." International Conference on Learning Representations, 2019.

Markdown

[Kolouri et al. "Sliced Wasserstein Auto-Encoders." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/kolouri2019iclr-sliced/)

BibTeX

@inproceedings{kolouri2019iclr-sliced,
  title     = {{Sliced Wasserstein Auto-Encoders}},
  author    = {Kolouri, Soheil and Pope, Phillip E. and Martin, Charles E. and Rohde, Gustavo K.},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/kolouri2019iclr-sliced/}
}