Cramer-Wold Auto-Encoder

Abstract

The computation of the distance to the true distribution is a key component of most state-of-the-art generative models. Inspired by prior work on Sliced-Wasserstein Auto-Encoders (SWAE) and Wasserstein Auto-Encoders with an MMD-based penalty (WAE-MMD), we propose a new generative model, the Cramer-Wold Auto-Encoder (CWAE). A fundamental component of CWAE is a characteristic kernel, referred to from here on as the Cramer-Wold kernel, whose construction is one of the goals of this paper. Its main distinguishing feature is a closed-form expression for the kernel product of radial Gaussians. Consequently, the CWAE model has a closed-form expression for the distance between the posterior and the normal prior, which simplifies the optimization procedure by removing the need to sample when computing the loss function. At the same time, CWAE performance often improves upon WAE-MMD and SWAE on standard benchmarks.
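The abstract's central point is that the latent regularizer can be evaluated in closed form against the normal prior, so no prior samples are needed when computing the loss. The sketch below is not the paper's Cramer-Wold formula; it illustrates the same idea with a Gaussian-RBF MMD, whose expectations against N(0, I) are standard Gaussian integrals with known closed forms. The function name, the bandwidth sigma, and the batch shapes are illustrative assumptions, not part of the paper.

import numpy as np

def analytic_mmd_to_standard_normal(z, sigma=1.0):
    """Closed-form (biased) Gaussian-kernel MMD^2 between an encoded batch z
    (shape: n x D) and the standard normal prior N(0, I_D).

    The cross term E_y[k(z_i, y)] and the prior term E_{y,y'}[k(y, y')] are
    Gaussian integrals with closed forms, so no prior samples are required.
    """
    n, d = z.shape
    s2 = sigma ** 2

    # Sample-sample term: mean_ij exp(-||z_i - z_j||^2 / (2 sigma^2)).
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    term_zz = np.mean(np.exp(-sq_dists / (2.0 * s2)))

    # Cross term: E_{y~N(0,I)}[k(z_i, y)]
    #   = (s2/(s2+1))^(d/2) * exp(-||z_i||^2 / (2*(s2+1))).
    term_zy = (s2 / (s2 + 1.0)) ** (d / 2.0) * np.mean(
        np.exp(-np.sum(z ** 2, axis=-1) / (2.0 * (s2 + 1.0))))

    # Prior-prior term: E_{y,y'~N(0,I)}[k(y, y')] = (s2/(s2+2))^(d/2).
    term_yy = (s2 / (s2 + 2.0)) ** (d / 2.0)

    return term_zz - 2.0 * term_zy + term_yy

# Example usage: latent codes from a hypothetical encoder, here random data.
latent = np.random.randn(128, 64)
print(analytic_mmd_to_standard_normal(latent))

In an auto-encoder this quantity would be added to the reconstruction error as the regularization term; the CWAE paper plays an analogous role for its Cramer-Wold distance, whose exact closed-form expression is given in the paper itself.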

Cite

Text

Knop et al. "Cramer-Wold Auto-Encoder." Journal of Machine Learning Research, 2020.

Markdown

[Knop et al. "Cramer-Wold Auto-Encoder." Journal of Machine Learning Research, 2020.](https://mlanthology.org/jmlr/2020/knop2020jmlr-cramerwold/)

BibTeX

@article{knop2020jmlr-cramerwold,
  title     = {{Cramer-Wold Auto-Encoder}},
  author    = {Knop, Szymon and Spurek, Przemysław and Tabor, Jacek and Podolak, Igor and Mazur, Marcin and Jastrzębski, Stanisław},
  journal   = {Journal of Machine Learning Research},
  year      = {2020},
  pages     = {1--28},
  volume    = {21},
  url       = {https://mlanthology.org/jmlr/2020/knop2020jmlr-cramerwold/}
}