Supervising Variational Autoencoder Latent Representations with Language

Abstract

Supervising latent representations of data is of great interest in modern multi-modal generative machine learning. In this work, we propose two new methods to use text to condition the latent representations of a VAE, and evaluate them on a novel conditional image-generation benchmark task. We find that the proposed methods generate highly accurate reconstructed images through language querying with minimal compute resources. Our methods are quantitatively successful at conforming to textually-supervised attributes of an image while keeping unsupervised attributes constant. More broadly, we present critical observations on disentanglement between supervised and unsupervised properties of images and identify common barriers to effective disentanglement.
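
The abstract does not spell out the two conditioning methods, but the general idea of supervising a VAE's latent space with a text-derived embedding can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's method: the class name `TextConditionedVAE`, the dimensions, and the choice to concatenate a text embedding to the latent before decoding are all hypothetical.

```python
# Minimal, illustrative sketch (NOT the paper's method) of conditioning a VAE's
# latent representation on a text-derived attribute embedding. All names and
# architectural choices here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextConditionedVAE(nn.Module):
    def __init__(self, image_dim=784, attr_dim=16, latent_dim=32, hidden_dim=256):
        super().__init__()
        # Encoder maps an image to the parameters of q(z | x).
        self.encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the image from the latent concatenated with a
        # text/attribute embedding, so language supervises part of the generation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + attr_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, image_dim),
        )

    def forward(self, x, text_embedding):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_recon = self.decoder(torch.cat([z, text_embedding], dim=-1))
        return x_recon, mu, logvar


def elbo_loss(x, x_recon, mu, logvar):
    # Standard VAE objective: reconstruction term plus KL divergence to N(0, I).
    recon = F.binary_cross_entropy_with_logits(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    model = TextConditionedVAE()
    x = torch.rand(8, 784)               # batch of flattened images
    text_embedding = torch.randn(8, 16)  # stand-in for a text encoder's output
    x_recon, mu, logvar = model(x, text_embedding)
    print(elbo_loss(x, x_recon, mu, logvar).item())
```

In a sketch like this, querying with different text embeddings at generation time changes only the textually-supervised attributes, while the sampled latent z carries the unsupervised ones; how well those two factors actually disentangle is the question the paper examines.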

Cite

Text

Lu et al. "Supervising Variational Autoencoder Latent Representations with Language." NeurIPS 2023 Workshops: UniReps, 2023.

Markdown

[Lu et al. "Supervising Variational Autoencoder Latent Representations with Language." NeurIPS 2023 Workshops: UniReps, 2023.](https://mlanthology.org/neuripsw/2023/lu2023neuripsw-supervising/)

BibTeX

@inproceedings{lu2023neuripsw-supervising,
  title     = {{Supervising Variational Autoencoder Latent Representations with Language}},
  author    = {Lu, Thomas and Marathe, Aboli and Martin, Ada},
  booktitle = {NeurIPS 2023 Workshops: UniReps},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/lu2023neuripsw-supervising/}
}