A Decoder Suffices for Query-Adaptive Variational Inference

Abstract

Deep generative models like variational autoencoders (VAEs) are widely used for density estimation and dimensionality reduction, but infer latent representations via amortized inference algorithms, which require that all data dimensions are observed. VAEs thus lack a key strength of probabilistic graphical models: the ability to infer posteriors for test queries with arbitrary structure. We demonstrate that many prior methods for imputation with VAEs are costly and ineffective, and achieve superior performance via query-adaptive variational inference (QAVI) algorithms based directly on the generative decoder. By analytically marginalizing arbitrary sets of missing features, and optimizing expressive posteriors including mixtures and density flows, our non-amortized QAVI algorithms achieve excellent performance while avoiding expensive model retraining. On standard image and tabular datasets, our approach substantially outperforms prior methods in the plausibility and diversity of imputations. We also show that QAVI effectively generalizes to recent hierarchical VAE models for high-dimensional images.
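To make the core idea concrete, here is a minimal, hypothetical sketch of decoder-only, query-adaptive variational inference. It is not the paper's implementation: a fixed linear-Gaussian "decoder" stands in for a trained VAE decoder, the variational posterior is a single Gaussian mean (rather than the mixtures or flows the paper uses), and all names (`W`, `b`, `sigma2`, `obs`) are illustrative. The key mechanics match the abstract: no encoder is used, the missing features are marginalized simply by dropping their likelihood terms, and a fresh posterior is optimized per query. For this conjugate toy model the exact posterior is available in closed form, so we can check the gradient-ascent result against it.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 3                       # data dim, latent dim (illustrative sizes)
W = rng.normal(size=(D, K))       # stand-in "pretrained decoder": p(x|z) = N(Wz + b, sigma2 I)
b = rng.normal(size=D)
sigma2 = 0.1                      # decoder observation-noise variance

x = W @ rng.normal(size=K) + b    # a test point drawn from the model
obs = np.array([0, 2, 3, 6])      # the query: indices of OBSERVED features
W_o, b_o, x_o = W[obs], b[obs], x[obs]

# Query-adaptive VI: fit the mean of q(z) for THIS query by ascending the
# ELBO. Missing features are analytically marginalized just by omitting
# their likelihood terms, so only W_o, x_o enter the gradient.
mu = np.zeros(K)
lr = 0.005
for _ in range(4000):
    # d ELBO / d mu = likelihood term (observed dims only) - prior (KL) term
    grad = W_o.T @ (x_o - W_o @ mu - b_o) / sigma2 - mu
    mu += lr * grad

# Closed-form check (possible only because this toy decoder is linear-Gaussian):
# posterior precision A = I + W_o^T W_o / sigma2, mean = A^{-1} W_o^T (x_o - b_o) / sigma2
A = np.eye(K) + W_o.T @ W_o / sigma2
mu_exact = np.linalg.solve(A, W_o.T @ (x_o - b_o) / sigma2)
print(np.allclose(mu, mu_exact, atol=1e-4))   # gradient ascent recovers it

x_imputed = W @ mu + b            # decode the inferred posterior mean to impute x
```

With a real (nonlinear) decoder there is no closed form, which is why the paper optimizes expressive non-amortized posteriors per query instead; the point of the sketch is only that the decoder alone defines everything the optimization needs.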

Cite

Text

Agarwal et al. "A Decoder Suffices for Query-Adaptive Variational Inference." Uncertainty in Artificial Intelligence, 2023.

Markdown

[Agarwal et al. "A Decoder Suffices for Query-Adaptive Variational Inference." Uncertainty in Artificial Intelligence, 2023.](https://mlanthology.org/uai/2023/agarwal2023uai-decoder/)

BibTeX

@inproceedings{agarwal2023uai-decoder,
  title     = {{A Decoder Suffices for Query-Adaptive Variational Inference}},
  author    = {Agarwal, Sakshi and Hope, Gabriel and Younis, Ali and Sudderth, Erik B.},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2023},
  pages     = {33--44},
  volume    = {216},
  url       = {https://mlanthology.org/uai/2023/agarwal2023uai-decoder/}
}