Variational Autoencoder with Arbitrary Conditioning

Abstract

We propose a single neural probabilistic model based on a variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as on feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and the diversity of the generated samples.
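To make the abstract concrete, the following is a minimal sketch (not the authors' reference implementation) of how a VAE-style model can be conditioned on an arbitrary subset of observed features and trained with stochastic variational Bayes. It assumes real-valued features only, Gaussian proposal, prior, and decoder networks, and a simple mask convention in which b = 1 marks unobserved features; all layer sizes, the mask distribution, and the optimizer settings are illustrative assumptions.

# Hedged sketch of arbitrary-conditioning VAE training (PyTorch); all names and
# hyperparameters are illustrative, not taken from the paper's released code.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class ArbitraryConditioningVAE(nn.Module):
    def __init__(self, x_dim, z_dim=32):
        super().__init__()
        # Proposal network q(z | x, b): sees the full feature vector and the mask.
        self.proposal = mlp(2 * x_dim, 2 * z_dim)
        # Prior network p(z | observed features, b): sees only unmasked features.
        self.prior = mlp(2 * x_dim, 2 * z_dim)
        # Decoder p(x_unobserved | z, observed features, b).
        self.decoder = mlp(z_dim + 2 * x_dim, 2 * x_dim)

    @staticmethod
    def _gaussian(params):
        mu, log_sigma = params.chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

    def elbo(self, x, b):
        # b == 1 marks unobserved features; observed values are x * (1 - b).
        x_obs = x * (1.0 - b)
        q_z = self._gaussian(self.proposal(torch.cat([x, b], dim=-1)))
        p_z = self._gaussian(self.prior(torch.cat([x_obs, b], dim=-1)))
        z = q_z.rsample()  # reparameterized sample for low-variance gradients
        p_x = self._gaussian(self.decoder(torch.cat([z, x_obs, b], dim=-1)))
        rec = (p_x.log_prob(x) * b).sum(-1)          # reconstruct only missing features
        kl = kl_divergence(q_z, p_z).sum(-1)         # KL(proposal || conditional prior)
        return (rec - kl).mean()

    @torch.no_grad()
    def impute(self, x_obs, b):
        # "One shot" sampling of the missing features given the observed ones.
        p_z = self._gaussian(self.prior(torch.cat([x_obs, b], dim=-1)))
        z = p_z.sample()
        p_x = self._gaussian(self.decoder(torch.cat([z, x_obs, b], dim=-1)))
        return x_obs * (1.0 - b) + p_x.sample() * b

# Illustrative usage on random data with random missingness masks.
model = ArbitraryConditioningVAE(x_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 10)
b = (torch.rand(64, 10) < 0.5).float()
loss = -model.elbo(x, b)
opt.zero_grad()
loss.backward()
opt.step()

Because the mask is an input to the proposal, prior, and decoder networks, a single trained model can be conditioned on any subset of observed features at test time; categorical features would replace the Gaussian decoder head with a categorical one, as the abstract indicates the model supports both types.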

Cite

Text

Ivanov et al. "Variational Autoencoder with Arbitrary Conditioning." International Conference on Learning Representations, 2019.

Markdown

[Ivanov et al. "Variational Autoencoder with Arbitrary Conditioning." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/ivanov2019iclr-variational/)

BibTeX

@inproceedings{ivanov2019iclr-variational,
  title     = {{Variational Autoencoder with Arbitrary Conditioning}},
  author    = {Ivanov, Oleg and Figurnov, Michael and Vetrov, Dmitry},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/ivanov2019iclr-variational/}
}