Wasserstein Auto-Encoders

Abstract

We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality.
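The abstract only sketches the objective: a reconstruction cost plus a penalty that pulls the aggregate encoded distribution Q_Z toward the prior P_Z. As a minimal illustrative sketch (not the authors' reference implementation), the MMD-penalized variant of this idea can be written in PyTorch as below; the network sizes, the RBF kernel bandwidth `sigma2`, and the penalty weight `lam` are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

def mmd_rbf(z_q, z_p, sigma2=1.0):
    """MMD estimate with an RBF kernel between encoded codes
    z_q ~ Q_Z and prior samples z_p ~ P_Z (both of shape [n, d])."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * sigma2))
    n = z_q.size(0)
    off_diag = 1.0 - torch.eye(n, device=z_q.device)  # drop i == j terms
    k_qq = (kernel(z_q, z_q) * off_diag).sum() / (n * (n - 1))
    k_pp = (kernel(z_p, z_p) * off_diag).sum() / (n * (n - 1))
    k_qp = kernel(z_q, z_p).mean()
    return k_qq + k_pp - 2.0 * k_qp

class WAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

    def loss(self, x, lam=10.0):
        z = self.enc(x)                                 # deterministic encoder Q(Z|X)
        x_rec = self.dec(z)                             # decoder G(Z)
        rec = ((x - x_rec) ** 2).sum(dim=1).mean()      # reconstruction cost c(X, G(Z))
        z_prior = torch.randn_like(z)                   # samples from the prior P_Z = N(0, I)
        return rec + lam * mmd_rbf(z, z_prior)          # penalized objective
```

A typical training step under these assumptions would flatten each input batch to `[batch, x_dim]`, call `model.loss(x)`, and backpropagate; the penalty acts on the whole encoded batch rather than on each point individually, which is what distinguishes this regularizer from the per-sample KL term of the VAE.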

Cite

Text

Tolstikhin et al. "Wasserstein Auto-Encoders." International Conference on Learning Representations, 2018.

Markdown

[Tolstikhin et al. "Wasserstein Auto-Encoders." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/tolstikhin2018iclr-wasserstein/)

BibTeX

@inproceedings{tolstikhin2018iclr-wasserstein,
  title     = {{Wasserstein Auto-Encoders}},
  author    = {Tolstikhin, Ilya and Bousquet, Olivier and Gelly, Sylvain and Schoelkopf, Bernhard},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/tolstikhin2018iclr-wasserstein/}
}