VAE Learning via Stein Variational Gradient Descent

Abstract

A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, illustrating the scalability of the model to large datasets.
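The core tool referenced in the abstract is Stein variational gradient descent (SVGD), which transports a set of particles toward a target distribution using only its score function, with a kernel term that keeps the particles spread out. The sketch below is not the paper's VAE encoder; it is a minimal, self-contained NumPy illustration of the standard SVGD update (RBF kernel, fixed bandwidth) applied to a toy 1-D Gaussian target, with all names and hyperparameters chosen here for illustration.

```python
import numpy as np

def svgd_step(x, score, h=1.0, eps=0.1):
    """One SVGD update.

    x     : (n, d) array of particles
    score : function returning grad log p(x), shape (n, d)
    h     : RBF kernel bandwidth (fixed here; the median heuristic is common)
    eps   : step size
    """
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]            # (n, n, d): x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)     # (n, n) RBF kernel matrix
    grad_logp = score(x)                            # (n, d) score at each particle
    # phi_i = (1/n) sum_j [ K_ij * grad_logp_j + grad_{x_j} k(x_j, x_i) ]
    # where grad_{x_j} k(x_j, x_i) = (2/h) (x_i - x_j) K_ij  (repulsive term)
    phi = (K @ grad_logp + (2.0 / h) * np.sum(K[:, :, None] * diff, axis=1)) / n
    return x + eps * phi

if __name__ == "__main__":
    # Toy target: N(3, 1), so grad log p(x) = -(x - 3)
    rng = np.random.RandomState(0)
    particles = rng.randn(50, 1)                    # initialized far from the target
    score = lambda x: -(x - 3.0)
    for _ in range(500):
        particles = svgd_step(particles, score)
    print(particles.mean())                         # approaches the target mean 3
```

Because the update only needs the score of the target, the same machinery can drive samples from an unnormalized posterior, which is what lets the paper's encoder avoid a fixed parametric form.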

Cite

Text

Pu et al. "VAE Learning via Stein Variational Gradient Descent." Neural Information Processing Systems, 2017.

Markdown

[Pu et al. "VAE Learning via Stein Variational Gradient Descent." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/pu2017neurips-vae/)

BibTeX

@inproceedings{pu2017neurips-vae,
  title     = {{VAE Learning via Stein Variational Gradient Descent}},
  author    = {Pu, Yunchen and Gan, Zhe and Henao, Ricardo and Li, Chunyuan and Han, Shaobo and Carin, Lawrence},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {4236--4245},
  url       = {https://mlanthology.org/neurips/2017/pu2017neurips-vae/}
}