Conditional Image Generation by Conditioning Variational Auto-Encoders
Abstract
We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE. To train the conditional VAE, we only need to train an artifact to perform amortized inference over the unconditional VAE's latent variables given a conditioning input. We demonstrate our approach on tasks including image inpainting, for which it outperforms state-of-the-art GAN-based approaches at faithfully representing the inherent uncertainty. We conclude by describing a possible application of our inpainting model, in which it is used to perform Bayesian experimental design for the purpose of guiding a sensor.
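
To make the training recipe concrete, below is a minimal PyTorch sketch of the high-level idea described in the abstract: the pretrained unconditional VAE's decoder stays frozen, and only a small amortized inference network over its latent variables is trained from the conditioning input (e.g., the observed pixels of a partially masked image). The module names (`ConditionalEncoder`, the toy `decoder`) and the simple Gaussian ELBO used here are illustrative assumptions, not the paper's exact architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEncoder(nn.Module):
    """Amortized inference network q(z | c) over the frozen VAE's latents.

    Illustrative assumption: a small MLP outputting a diagonal Gaussian.
    """
    def __init__(self, cond_dim: int, latent_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # mean and log-variance
        )

    def forward(self, c: torch.Tensor):
        mu, logvar = self.net(c).chunk(2, dim=-1)
        return mu, logvar

def training_step(cond_encoder, frozen_decoder, x_full, c, opt):
    """One step of a simple conditional ELBO: reconstruct the full image
    x_full from a latent inferred from the conditioning input c alone.

    frozen_decoder's parameters should have requires_grad=False, so
    gradients flow through it to the encoder but never update it.
    """
    mu, logvar = cond_encoder(c)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
    x_hat = frozen_decoder(z)
    recon = F.mse_loss(x_hat, x_full, reduction="sum") / x_full.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    loss = recon + kl  # negative ELBO with a standard-normal prior on z
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage sketch with a frozen toy decoder standing in for the pretrained VAE.
latent_dim, cond_dim, image_dim = 64, 1024, 3072
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                        nn.Linear(512, image_dim))
decoder.requires_grad_(False)  # leverage the foundation model without updating it
encoder = ConditionalEncoder(cond_dim, latent_dim)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
x_full = torch.randn(8, image_dim)   # full images (dummy data)
c = torch.randn(8, cond_dim)         # conditioning inputs (dummy data)
print(training_step(encoder, decoder, x_full, c, opt))
```

Because gradients flow through the frozen decoder but never update it, the only trained parameters are the conditional encoder's, which is what makes this far cheaper than training a conditional model from scratch.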
Cite
Text
Harvey et al. "Conditional Image Generation by Conditioning Variational Auto-Encoders." International Conference on Learning Representations, 2022.Markdown
[Harvey et al. "Conditional Image Generation by Conditioning Variational Auto-Encoders." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/harvey2022iclr-conditional/)BibTeX
@inproceedings{harvey2022iclr-conditional,
  title = {{Conditional Image Generation by Conditioning Variational Auto-Encoders}},
  author = {Harvey, William and Naderiparizi, Saeid and Wood, Frank},
  booktitle = {International Conference on Learning Representations},
  year = {2022},
  url = {https://mlanthology.org/iclr/2022/harvey2022iclr-conditional/}
}