Adversarial Symmetric Variational Autoencoder

Abstract

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods of the observed data and the latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, demonstrating state-of-the-art data reconstruction and generation on several image benchmark datasets.
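The symmetric objective described above can be made concrete with a small sketch. What follows is a minimal, hypothetical PyTorch illustration of the core idea, not the paper's exact construction: the paper learns adversarial bounds on both marginal log-likelihoods, whereas here a single critic on (x, z) pairs stands in for the density-ratio estimate, and a squared error stands in for the reconstruction likelihood. All network sizes, class names, and hyperparameters are assumptions made for illustration.

import torch
import torch.nn as nn

# Hypothetical toy dimensions (not from the paper).
X_DIM, Z_DIM, H = 784, 32, 256

class Encoder(nn.Module):
    # q_phi(z|x): diagonal Gaussian with the reparameterization trick.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM, H), nn.ReLU())
        self.mu, self.logvar = nn.Linear(H, Z_DIM), nn.Linear(H, Z_DIM)
    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

class Decoder(nn.Module):
    # p_theta(x|z): deterministic mean output, kept simple for the sketch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM, H), nn.ReLU(),
                                 nn.Linear(H, X_DIM), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    # Discriminator on joint (x, z) samples from the two factorizations.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM + Z_DIM, H), nn.ReLU(),
                                 nn.Linear(H, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

enc, dec, crit = Encoder(), Decoder(), Critic()
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(crit.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(x):
    # (i) encoder path: samples from q(x) q_phi(z|x).
    z_q = enc(x)
    # (ii) decoder path: samples from p(z) p_theta(x|z).
    z_p = torch.randn(x.size(0), Z_DIM)
    x_p = dec(z_p)

    # Critic update: distinguish the two joint distributions.
    d_q = crit(x, z_q.detach())
    d_p = crit(x_p.detach(), z_p)
    loss_d = bce(d_q, torch.ones_like(d_q)) + bce(d_p, torch.zeros_like(d_p))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Encoder/decoder update. At the critic's optimum its logit approximates
    # log q(x,z) - log p(x,z), so shrinking E_q[logit] - E_p[logit] drives
    # the symmetric KL between the two joints toward zero.
    ratio = crit(x, enc(x)).mean() - crit(dec(z_p), z_p).mean()
    recon = ((dec(enc(x)) - x) ** 2).mean()  # surrogate for the likelihood term
    loss_g = recon + ratio
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    x = torch.rand(64, X_DIM)  # stand-in minibatch of flattened images
    print(training_step(x))

The single-critic density-ratio trick shown here is a standard adversarial estimator; the paper's contribution is to embed such estimates in symmetric variational bounds on both marginals, which this sketch only gestures at.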

Cite

Text

Pu et al. "Adversarial Symmetric Variational Autoencoder." Neural Information Processing Systems, 2017.

Markdown

[Pu et al. "Adversarial Symmetric Variational Autoencoder." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/pu2017neurips-adversarial/)

BibTeX

@inproceedings{pu2017neurips-adversarial,
  title     = {{Adversarial Symmetric Variational Autoencoder}},
  author    = {Pu, Yunchen and Wang, Weiyao and Henao, Ricardo and Chen, Liqun and Gan, Zhe and Li, Chunyuan and Carin, Lawrence},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {4330--4339},
  url       = {https://mlanthology.org/neurips/2017/pu2017neurips-adversarial/}
}