Revisiting Auxiliary Latent Variables in Generative Models

Abstract

Extending models with auxiliary latent variables is a well-known technique to increase model expressivity. Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. (2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables. Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018). The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model. We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement. Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).
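
For context, the IWAE bound referenced above (Burda et al., 2015) is the K-sample lower bound on the marginal likelihood

$$\log p(x) \ \ge\ \mathcal{L}_K \;=\; \mathbb{E}_{z_1, \ldots, z_K \sim q(z \mid x)}\!\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k \mid x)} \right],$$

which reduces to the standard ELBO at K = 1 and tightens as K grows. The auxiliary-variable view described in the abstract treats the K samples (and the choice among them) as auxiliary latent variables in an extended variational family.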

Cite

Text

Lawson et al. "Revisiting Auxiliary Latent Variables in Generative Models." ICLR 2019 Workshops: DeepGenStruct, 2019.

Markdown

[Lawson et al. "Revisiting Auxiliary Latent Variables in Generative Models." ICLR 2019 Workshops: DeepGenStruct, 2019.](https://mlanthology.org/iclrw/2019/lawson2019iclrw-revisiting/)

BibTeX

@inproceedings{lawson2019iclrw-revisiting,
  title     = {{Revisiting Auxiliary Latent Variables in Generative Models}},
  author    = {Lawson, Dieterich and Tucker, George and Dai, Bo and Ranganath, Rajesh},
  booktitle = {ICLR 2019 Workshops: DeepGenStruct},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/lawson2019iclrw-revisiting/}
}