Generative Class-Conditional Autoencoders

Abstract

Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which involves learning the transition operator of a Markov chain. The transition operator is typically unimodal, which limits its capacity to model complex data. In order to perform efficient sampling from conditional distributions, we extend this work, both theoretically and algorithmically, to gated autoencoders (Memisevic, 2013). The proposed model is able to generate convincing class-conditional samples when trained on both the MNIST and TFD datasets.
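To make the idea behind the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code) of the sampling procedure it describes: a factored gated autoencoder whose class label multiplicatively gates the encoder and decoder, run as the transition operator of a Markov chain by alternating corruption and reconstruction, in the spirit of Bengio et al. (2013). All dimensions, the Gaussian corruption, and the random (untrained) weights are assumptions for illustration only; in practice the weights would be learned with a denoising reconstruction objective.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; not the paper's settings).
n_vis, n_lab, n_fac, n_hid = 784, 10, 128, 64

# Factored gated-autoencoder parameters (randomly initialized here;
# a trained model would learn these from data).
W_x = rng.normal(0, 0.01, (n_vis, n_fac))   # visible units -> factors
W_y = rng.normal(0, 0.01, (n_lab, n_fac))   # class label   -> factors
W_h = rng.normal(0, 0.01, (n_fac, n_hid))   # factors       -> hidden units
b_h = np.zeros(n_hid)
b_x = np.zeros(n_vis)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def reconstruct(x, y):
    # One pass of a factored gated autoencoder: the one-hot label y
    # multiplicatively gates the mapping from the (corrupted) input x
    # back to a reconstruction.
    f = (x @ W_x) * (y @ W_y)            # multiplicative (gated) factors
    h = sigmoid(f @ W_h + b_h)           # hidden representation
    f_out = (h @ W_h.T) * (y @ W_y)      # gate the decoder the same way
    return sigmoid(f_out @ W_x.T + b_x)  # reconstruction in [0, 1]

def sample_chain(y, n_steps=100, noise=0.5):
    # Markov-chain sampling as in Bengio et al. (2013): alternate
    # corruption and reconstruction, here conditioned on the class y.
    x = rng.uniform(size=n_vis)          # arbitrary starting point
    for _ in range(n_steps):
        x_tilde = np.clip(x + rng.normal(0, noise, n_vis), 0, 1)  # corrupt
        x = reconstruct(x_tilde, y)                               # denoise
    return x

y = np.eye(n_lab)[3]                     # one-hot class label, e.g. digit "3"
sample = sample_chain(y)                 # class-conditional sample of length n_vis

Because the label gates both encoder and decoder, the same learned transition operator can be steered toward different modes of the data, which is the class-conditional sampling the abstract refers to.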

Cite

Text

Rudy and Taylor. "Generative Class-Conditional Autoencoders." International Conference on Learning Representations, 2015.

Markdown

[Rudy and Taylor. "Generative Class-Conditional Autoencoders." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/rudy2015iclr-generative/)

BibTeX

@inproceedings{rudy2015iclr-generative,
  title     = {{Generative Class-Conditional Autoencoders}},
  author    = {Rudy, Jan and Taylor, Graham W.},
  booktitle = {International Conference on Learning Representations},
  year      = {2015},
  url       = {https://mlanthology.org/iclr/2015/rudy2015iclr-generative/}
}