Variational Autoencoder for Deep Learning of Images, Labels and Captions

Abstract

A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence or absence of associated labels/captions, a new semi-supervised setting emerges for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
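To make the architecture concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a CNN encoder approximates a diagonal-Gaussian distribution over the latent code, a deconvolutional decoder plays the role of the DGDN, and test-time label prediction averages over samples from the latent distribution. All layer sizes and names here are illustrative assumptions, a plain linear classifier stands in for the paper's Bayesian SVM, and the caption RNN is omitted; this is a sketch, not the authors' implementation.

# Minimal sketch of the encoder/decoder structure described in the abstract.
# Assumptions: 3x32x32 inputs, illustrative layer sizes, and a linear
# classifier substituted for the paper's Bayesian SVM label model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=64, num_classes=10):
        super().__init__()
        # CNN encoder: approximates q(z | x) as a diagonal Gaussian.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
        # Deconvolutional decoder: stands in for the DGDN p(x | z).
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),
        )
        # Label model on the latent code (paper: Bayesian SVM; here: linear).
        self.classifier = nn.Linear(latent_dim, num_classes)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps: differentiable sampling from q(z | x).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        recon = self.decoder(self.fc_dec(z).view(-1, 64, 8, 8))
        return recon, self.classifier(z), mu, logvar

    @torch.no_grad()
    def predict_label(self, x, num_samples=10):
        # Average predictions over samples from q(z | x), mirroring the
        # abstract's test-time averaging across the distribution of latent
        # codes; cheap because the CNN encoder runs only once per image.
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        probs = 0.0
        for _ in range(num_samples):
            z = self.reparameterize(mu, logvar)
            probs = probs + F.softmax(self.classifier(z), dim=-1)
        return probs / num_samples

Under these assumptions, training on labeled images would combine the usual VAE reconstruction and KL terms with a classification loss on the logits; for unlabeled images the classification term is simply dropped, which is what permits the semi-supervised and fully unsupervised settings the abstract mentions.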

Cite

Text

Pu et al. "Variational Autoencoder for Deep Learning of Images, Labels and Captions." Neural Information Processing Systems, 2016.

Markdown

[Pu et al. "Variational Autoencoder for Deep Learning of Images, Labels and Captions." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/pu2016neurips-variational/)

BibTeX

@inproceedings{pu2016neurips-variational,
  title     = {{Variational Autoencoder for Deep Learning of Images, Labels and Captions}},
  author    = {Pu, Yunchen and Gan, Zhe and Henao, Ricardo and Yuan, Xin and Li, Chunyuan and Stevens, Andrew and Carin, Lawrence},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {2352--2360},
  url       = {https://mlanthology.org/neurips/2016/pu2016neurips-variational/}
}