WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding

Abstract

In this paper, we present a new generative model for learning latent embeddings. In contrast to the classical generative process, where each observed data point is generated from an individual latent variable, our approach assumes a global latent variable that generates the whole set of observed data points. We then propose a learning objective derived as an approximation to a lower bound on the data log-likelihood, leading to our algorithm, WiSE-ALE. Whereas the standard ELBO objective encourages the variational posterior for each data point to match the prior distribution, the WiSE-ALE objective matches the posterior averaged over all samples with the prior. This allows the sample-wise posterior distributions a wider range of acceptable embedding means and variances, leading to better reconstruction quality in the auto-encoding process. Through various examples and comparisons with other state-of-the-art VAE models, we demonstrate that WiSE-ALE has excellent information-embedding properties while still retaining the ability to learn a smooth, compact representation.
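To make the contrast concrete, the two regularizers can be written schematically as follows (notation ours, not verbatim from the paper: $q_\phi(z \mid x_i)$ is the per-sample variational posterior and $p(z)$ the prior; the exact weighting and bounds used in the paper may differ):

$$
\frac{1}{N}\sum_{i=1}^{N} D_{\mathrm{KL}}\!\left(q_\phi(z \mid x_i) \,\middle\|\, p(z)\right)
\quad\text{vs.}\quad
D_{\mathrm{KL}}\!\left(\frac{1}{N}\sum_{i=1}^{N} q_\phi(z \mid x_i) \,\middle\|\, p(z)\right).
$$

Since the KL divergence is jointly convex, the aggregate form on the right is never larger than the per-sample average on the left, so matching only the aggregate posterior to the prior imposes a weaker constraint on each individual $q_\phi(z \mid x_i)$.

Below is a minimal sketch of the two penalties for a batch of diagonal-Gaussian posteriors. It uses a moment-matched Gaussian to approximate the aggregate (mixture) posterior, which is an assumption of this sketch, not necessarily the approximation derived in the paper:

```python
import numpy as np

def per_sample_kl(mu, logvar):
    """Standard ELBO regularizer: batch mean of the closed-form
    KL( N(mu_i, diag(sigma_i^2)) || N(0, I) )."""
    return np.mean(np.sum(0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar), axis=1))

def aggregate_kl(mu, logvar):
    """Aggregate-posterior regularizer: moment-match the batch-averaged
    posterior (a Gaussian mixture) with a single diagonal Gaussian, then
    take its closed-form KL to N(0, I). (Moment matching is an assumption
    of this sketch, not the paper's derivation.)"""
    m = np.mean(mu, axis=0)                             # mixture mean
    v = np.mean(np.exp(logvar) + mu**2, axis=0) - m**2  # mixture variance
    return np.sum(0.5 * (v + m**2 - 1.0 - np.log(v)))

# Example: posterior means spread symmetrically around 0 with narrow
# variances. The aggregate penalty stays near zero while the per-sample
# penalty grows, illustrating the extra freedom granted to individual
# posteriors when only the averaged posterior must match the prior.
rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=(128, 2))   # per-sample posterior means
logvar = np.full((128, 2), np.log(0.1))    # per-sample log-variances
print(per_sample_kl(mu, logvar), aggregate_kl(mu, logvar))
```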

Cite

Text

Lin et al. "WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding." ICLR 2019 Workshops: DeepGenStruct, 2019.

Markdown

[Lin et al. "WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding." ICLR 2019 Workshops: DeepGenStruct, 2019.](https://mlanthology.org/iclrw/2019/lin2019iclrw-wiseale/)

BibTeX

@inproceedings{lin2019iclrw-wiseale,
  title     = {{WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding}},
  author    = {Lin, Shuyu and Clark, Ronald and Birke, Robert and Trigoni, Niki and Roberts, Stephen},
  booktitle = {ICLR 2019 Workshops: DeepGenStruct},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/lin2019iclrw-wiseale/}
}