Green Generative Modeling: Recycling Dirty Data Using Recurrent Variational Autoencoders
Abstract
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially-tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a second, complementary alteration is proposed to further build invariance to contaminated or dirty samples via a data augmentation process that amounts to recycling. In brief, to the extent that the VAE is legitimately a representative generative model, each output from the decoder should closely resemble an authentic sample, which can then be resubmitted as a novel input ad infinitum. Moreover, this can be accomplished via special recurrent connections without the need for additional parameters to be trained. We evaluate these proposals on multiple practical outlier-removal and generative modeling tasks, demonstrating considerable improvements over existing algorithms.
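The recycling idea in the abstract can be illustrated with a minimal sketch: a decoder output is resubmitted as a fresh input, reusing the same encoder/decoder weights on every pass so that no additional parameters are introduced. The linear encoder/decoder, dimensions, and function names below are illustrative stand-ins, not the paper's actual deep VAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder stand-in for a VAE (hypothetical sizes:
# data dimension d, latent dimension k).
d, k = 8, 3
W_enc = rng.normal(size=(k, d)) * 0.1
W_dec = rng.normal(size=(d, k)) * 0.1

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def recycle(x, n_passes=3):
    """Feed the decoder output back in as a new input n_passes times.

    This mimics the recurrent recycling described in the abstract: the
    same encoder/decoder weights are reused on each pass, so no
    additional trainable parameters are introduced.
    """
    for _ in range(n_passes):
        x = decode(encode(x))
    return x

x = rng.normal(size=d)
x_recycled = recycle(x, n_passes=3)
print(x_recycled.shape)  # → (8,)
```

Each recycled sample keeps the dimensionality of the original input, so it can be resubmitted indefinitely, which is what makes the augmentation loop well-defined.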
Cite

Text

Wang et al. "Green Generative Modeling: Recycling Dirty Data Using Recurrent Variational Autoencoders." Conference on Uncertainty in Artificial Intelligence, 2017.

Markdown

[Wang et al. "Green Generative Modeling: Recycling Dirty Data Using Recurrent Variational Autoencoders." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/wang2017uai-green/)

BibTeX
@inproceedings{wang2017uai-green,
title = {{Green Generative Modeling: Recycling Dirty Data Using Recurrent Variational Autoencoders}},
author = {Wang, Yu and Dai, Bin and Hua, Gang and Aston, John A. D. and Wipf, David P.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2017},
url = {https://mlanthology.org/uai/2017/wang2017uai-green/}
}