Variational Autoencoders Trained with Q-Deformed Lower Bounds

Abstract

Variational autoencoders (VAEs) have been successful at learning a low-dimensional manifold from high-dimensional data with complex dependencies. At their core lies a powerful Bayesian probabilistic inference model that captures the salient features of the data. During training, they exploit the power of variational inference by optimizing a lower bound on the model evidence. The latent representation and the performance of VAEs are heavily influenced by the type of bound used as a cost function. Significant research has been devoted to developing bounds tighter than the original ELBO, in order to approximate the true log-likelihood more accurately. We contribute to this line of research by applying the q-deformed logarithm to the traditional lower bounds, ELBO and IWAE, and to the upper bound CUBO. In this proof-of-concept study, we explore different ways of constructing q-deformed bounds that are tighter than the classical ones, and we show improvements in the performance of such VAEs on the binarized MNIST dataset.
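The deformation at the heart of the paper is the Tsallis q-logarithm, ln_q(x) = (x^(1-q) - 1)/(1 - q), which recovers the natural logarithm as q → 1 and, for 0 < q < 1, upper-bounds ln(x). A minimal sketch of this standard definition (illustrative only; the paper's exact construction of the q-deformed bounds is not reproduced here):

```python
import math

def q_log(x, q):
    """Tsallis q-deformed logarithm: ln_q(x) = (x**(1 - q) - 1) / (1 - q).

    Recovers the natural logarithm in the limit q -> 1.
    """
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# For 0 < q < 1, ln_q(x) >= ln(x) for all x > 0 -- the monotone
# relationship that makes q-deformed evidence bounds possible.
for x in (0.5, 1.0, 2.0, 10.0):
    assert q_log(x, 0.5) >= math.log(x)

# The deformation approaches the ordinary logarithm as q -> 1.
assert abs(q_log(2.0, 0.999) - math.log(2.0)) < 1e-3
```

Replacing the logarithm in the ELBO or IWAE objective with ln_q for a suitable q is the basic mechanism by which the paper tightens the classical bounds.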

Cite

Text

Sârbu and Malagò. "Variational Autoencoders Trained with Q-Deformed Lower Bounds." ICLR 2019 Workshops: DeepGenStruct, 2019.

Markdown

[Sârbu and Malagò. "Variational Autoencoders Trained with Q-Deformed Lower Bounds." ICLR 2019 Workshops: DeepGenStruct, 2019.](https://mlanthology.org/iclrw/2019/sarbu2019iclrw-variational/)

BibTeX

@inproceedings{sarbu2019iclrw-variational,
  title     = {{Variational Autoencoders Trained with Q-Deformed Lower Bounds}},
  author    = {Sârbu, Septimia and Malagò, Luigi},
  booktitle = {ICLR 2019 Workshops: DeepGenStruct},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/sarbu2019iclrw-variational/}
}