Isolating Sources of Disentanglement in Variational Autoencoders

Abstract

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement when the model is trained using our framework.
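
For reference, a brief sketch of the two quantities the abstract refers to, written in LaTeX following the paper's notation (n indexes datapoints, z are the latent variables, v_k are ground-truth factors); this is a summary of the decomposition and the MIG definition, not the full derivation:

% ELBO decomposition of the aggregate KL term into
% index-code mutual information + total correlation + dimension-wise KL:
\mathbb{E}_{p(n)}\bigl[\mathrm{KL}\bigl(q(z \mid n)\,\|\,p(z)\bigr)\bigr]
  = I_q(z; n)
  + \mathrm{KL}\Bigl(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Bigr)
  + \textstyle\sum_j \mathrm{KL}\bigl(q(z_j)\,\|\,p(z_j)\bigr)

% beta-TCVAE up-weights only the total-correlation term with beta,
% leaving the other two terms at weight one.

% Mutual information gap (MIG), with j^{(k)} = \arg\max_j I(z_j; v_k):
\mathrm{MIG}
  = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{H(v_k)}
    \Bigl( I\bigl(z_{j^{(k)}}; v_k\bigr) - \max_{j \neq j^{(k)}} I\bigl(z_j; v_k\bigr) \Bigr)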

Cite

Text

Chen et al. "Isolating Sources of Disentanglement in Variational Autoencoders." Neural Information Processing Systems, 2018.

Markdown

[Chen et al. "Isolating Sources of Disentanglement in Variational Autoencoders." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/chen2018neurips-isolating/)

BibTeX

@inproceedings{chen2018neurips-isolating,
  title     = {{Isolating Sources of Disentanglement in Variational Autoencoders}},
  author    = {Chen, Ricky T. Q. and Li, Xuechen and Grosse, Roger B. and Duvenaud, David K.},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {2610--2620},
  url       = {https://mlanthology.org/neurips/2018/chen2018neurips-isolating/}
}