Learning Latent Subspaces in Variational Autoencoders

Abstract

Variational autoencoders (VAEs) are widely used deep generative models capable of learning unsupervised latent representations of data. Such representations are often difficult to interpret or control. We consider the problem of unsupervised learning of features correlated with specific labels in a dataset. We propose a VAE-based generative model which we show is capable of extracting features correlated with binary labels in the data and structuring them in a latent subspace which is easy to interpret. Our model, the Conditional Subspace VAE (CSVAE), uses mutual information minimization to learn a low-dimensional latent subspace associated with each label that can easily be inspected and independently manipulated. We demonstrate the utility of the learned representations for attribute manipulation tasks on both the Toronto Face and CelebA datasets.
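To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a CSVAE-style objective: the latent space is split into a subspace w (tied to the label y) and a subspace z, and an adversarial classifier is used as a proxy for minimizing the mutual information between z and y. All dimensions, architectures, priors, and loss weights here are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed dimensions for illustration only.
X_DIM, Y_DIM, Z_DIM, W_DIM = 784, 1, 10, 2

class Encoder(nn.Module):
    """Maps x to a Gaussian posterior over z, and (x, y) to one over w."""
    def __init__(self):
        super().__init__()
        self.z_net = nn.Sequential(nn.Linear(X_DIM, 256), nn.ReLU())
        self.z_mu, self.z_logvar = nn.Linear(256, Z_DIM), nn.Linear(256, Z_DIM)
        self.w_net = nn.Sequential(nn.Linear(X_DIM + Y_DIM, 256), nn.ReLU())
        self.w_mu, self.w_logvar = nn.Linear(256, W_DIM), nn.Linear(256, W_DIM)

    def forward(self, x, y):
        hz = self.z_net(x)
        hw = self.w_net(torch.cat([x, y], dim=1))
        return (self.z_mu(hz), self.z_logvar(hz),
                self.w_mu(hw), self.w_logvar(hw))

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

decoder = nn.Sequential(nn.Linear(Z_DIM + W_DIM, 256), nn.ReLU(),
                        nn.Linear(256, X_DIM))
# Adversary tries to predict the label y from z alone.
adversary = nn.Sequential(nn.Linear(Z_DIM, 64), nn.ReLU(),
                          nn.Linear(64, Y_DIM))

def csvae_losses(enc, x, y):
    """x: (B, X_DIM) floats; y: (B, Y_DIM) floats in {0, 1}."""
    z_mu, z_lv, w_mu, w_lv = enc(x, y)
    z, w = reparameterize(z_mu, z_lv), reparameterize(w_mu, w_lv)
    x_hat = decoder(torch.cat([z, w], dim=1))
    recon = F.mse_loss(x_hat, x)
    # KL to standard normal priors; the paper uses label-dependent priors
    # on w, which this sketch simplifies away.
    kl = lambda mu, lv: -0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
    # Encoder is trained to make y unpredictable from z (targeting a
    # maximally uncertain adversary), a proxy for minimizing I(z; y).
    adv_confusion = F.binary_cross_entropy_with_logits(
        adversary(z), torch.full_like(y, 0.5))
    # The adversary itself is trained separately, on detached z.
    adv_fit = F.binary_cross_entropy_with_logits(adversary(z.detach()), y)
    main_loss = recon + kl(z_mu, z_lv) + kl(w_mu, w_lv) + adv_confusion
    return main_loss, adv_fit

In training one would alternate the two losses each batch: main_loss updates the encoder and decoder, while adv_fit updates only the adversary. After training, attribute manipulation amounts to traversing w while holding z fixed.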

Cite

Text

Klys et al. "Learning Latent Subspaces in Variational Autoencoders." Neural Information Processing Systems, 2018.

Markdown

[Klys et al. "Learning Latent Subspaces in Variational Autoencoders." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/klys2018neurips-learning/)

BibTeX

@inproceedings{klys2018neurips-learning,
  title     = {{Learning Latent Subspaces in Variational Autoencoders}},
  author    = {Klys, Jack and Snell, Jake and Zemel, Richard},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {6444--6454},
  url       = {https://mlanthology.org/neurips/2018/klys2018neurips-learning/}
}