Latent Bernoulli Autoencoder

Abstract

In this work, we ask whether it is possible to design and train an autoencoder model end-to-end to learn representations in a multivariate Bernoulli latent space, and to achieve performance comparable with state-of-the-art variational methods. Moreover, we investigate how to generate novel samples and perform smooth interpolation and attribute modification in this binary latent space. To meet our objective, we propose a simplified, deterministic model with a straight-through gradient estimator to learn the binary latents and show that it is competitive with the latest VAE methods. Furthermore, we propose a novel method based on random hyperplane rounding for sampling and smooth interpolation in the latent space. Our method performs on par with or better than the current state-of-the-art methods on the common CelebA, CIFAR-10, and MNIST datasets.
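To illustrate the straight-through idea at the heart of the model (this is a generic NumPy sketch of the estimator, not the paper's exact architecture or implementation), the forward pass thresholds a sigmoid activation into a hard {0, 1} code, while the backward pass treats the non-differentiable threshold as the identity so that gradients can flow to the encoder:

```python
import numpy as np

def st_binarize_forward(logits):
    """Forward pass: sigmoid activation, then hard threshold to {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-logits))   # Bernoulli parameters in (0, 1)
    b = (p > 0.5).astype(logits.dtype)  # hard binary latent code
    return b, p                         # keep p for the backward pass

def st_binarize_backward(grad_b, p):
    """Straight-through backward pass: the threshold is treated as the
    identity, so only the sigmoid's derivative scales the upstream
    gradient (one common variant of the estimator)."""
    return grad_b * p * (1.0 - p)

# Toy usage: three latent logits produce a binary code, and the
# decoder's gradient is passed back through the hard threshold.
logits = np.array([-2.0, 0.1, 3.0])
b, p = st_binarize_forward(logits)
grad_logits = st_binarize_backward(np.ones_like(b), p)
```

In a real model the hard codes `b` feed the decoder, and `st_binarize_backward` makes the whole pipeline trainable with ordinary backpropagation despite the discrete bottleneck.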

Cite

Text

Fajtl et al. "Latent Bernoulli Autoencoder." International Conference on Machine Learning, 2020.

Markdown

[Fajtl et al. "Latent Bernoulli Autoencoder." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/fajtl2020icml-latent/)

BibTeX

@inproceedings{fajtl2020icml-latent,
  title     = {{Latent Bernoulli Autoencoder}},
  author    = {Fajtl, Jiri and Argyriou, Vasileios and Monekosso, Dorothy and Remagnino, Paolo},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {2964--2974},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/fajtl2020icml-latent/}
}