Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models
Abstract
Recent findings indicate that deep generative models can assign unreasonably high likelihoods to out-of-distribution data points. Especially in applications such as autonomous driving, medicine, and robotics, these overconfident ratings can have detrimental effects. In this work, we argue that two factors contribute to these findings: 1) modeling assumptions, such as the choice of the likelihood, and 2) evaluation under local posterior distributions versus the global prior distribution. We demonstrate experimentally how these mechanisms can bias the likelihood estimates of variational autoencoders.
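The contrast between evaluating a likelihood under the global prior versus a local (posterior-like) proposal can be illustrated on a toy linear-Gaussian latent-variable model, where the marginal likelihood is available in closed form. The sketch below is not the paper's experimental setup; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model: p(z) = N(0, 1), p(x | z) = N(z, sigma_x^2).
# Its exact marginal is p(x) = N(0, 1 + sigma_x^2), so both estimators
# can be checked against ground truth.
sigma_x = 0.5
x = 2.0  # a single (illustrative) test point

def log_normal(v, mean, std):
    """Log density of N(mean, std^2) evaluated at v."""
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((v - mean) / std) ** 2

def logmeanexp(a):
    """Numerically stable log of the mean of exp(a)."""
    m = a.max()
    return m + np.log(np.mean(np.exp(a - m)))

n = 200_000

# 1) "Global" estimate: plain Monte Carlo with samples from the prior p(z).
#    High variance when the prior rarely hits latents that explain x.
z_prior = rng.standard_normal(n)
log_px_prior = logmeanexp(log_normal(x, z_prior, sigma_x))

# 2) "Local" estimate: importance sampling with the exact posterior p(z | x)
#    as the proposal (in a VAE, the amortized q(z | x) plays this role).
post_var = 1.0 / (1.0 + 1.0 / sigma_x**2)
post_mean = post_var * x / sigma_x**2
z_post = post_mean + np.sqrt(post_var) * rng.standard_normal(n)
log_w = (log_normal(x, z_post, sigma_x)
         + log_normal(z_post, 0.0, 1.0)
         - log_normal(z_post, post_mean, np.sqrt(post_var)))
log_px_post = logmeanexp(log_w)

exact = log_normal(x, 0.0, np.sqrt(1.0 + sigma_x**2))
print(f"prior-based:     {log_px_prior:.4f}")
print(f"posterior-based: {log_px_post:.4f}")
print(f"exact:           {exact:.4f}")
```

With the exact posterior as proposal, every importance weight equals p(x) and the estimate is exact up to floating-point error; the prior-based estimate is unbiased but noisy, which hints at why the choice of evaluation distribution matters for likelihood estimates.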
Cite
Text
Bütepage et al. "Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Bütepage et al. "Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/butepage2019cvprw-modeling/)

BibTeX
@inproceedings{butepage2019cvprw-modeling,
title = {{Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models}},
author = {Bütepage, Judith and Poklukar, Petra and Kragic, Danica},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {9--12},
url = {https://mlanthology.org/cvprw/2019/butepage2019cvprw-modeling/}
}