Posterior Collapse of a Linear Latent Variable Model
Abstract
This work identifies the existence and cause of a type of posterior collapse that frequently occurs in Bayesian deep learning practice. For a general linear latent variable model that includes linear variational autoencoders as a special case, we precisely identify the nature of posterior collapse as a competition between the likelihood and the regularization of the mean due to the prior. Our result also suggests that posterior collapse may be a general problem of learning with deeper architectures, and it deepens our understanding of Bayesian deep learning.
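The abstract's central claim, that collapse arises from a competition between the reconstruction likelihood and the prior's pull on the posterior mean, can be illustrated numerically. The sketch below is a hypothetical one-dimensional beta-VAE-style toy model, not the paper's actual model or proof: the decoder weight `w`, encoder mean weight `u`, encoder variance `s2`, data variance `SIGMA_X2`, and the beta weight on the KL term are all illustrative assumptions. When the KL weight is small the optimal encoder mean weight stays nonzero; when it is large, the posterior collapses to the prior and `u` is driven to zero.

```python
import numpy as np
from scipy.optimize import minimize

SIGMA_X2 = 4.0  # variance of the data x (assumed toy value)

def elbo_loss(params, beta):
    """Negative ELBO of a 1-D linear VAE, in closed form over the data.

    Decoder: x_hat = w * z with unit observation noise.
    Encoder: q(z|x) = N(u * x, s2).  Prior: p(z) = N(0, 1).
    """
    w, u, log_s2 = params
    s2 = np.exp(log_s2)
    # Expected reconstruction error: E_x E_{z~q}[(x - w z)^2] / 2
    recon = ((1.0 - w * u) ** 2 * SIGMA_X2 + w ** 2 * s2) / 2.0
    # Expected KL(q(z|x) || N(0,1)) over the data distribution; the
    # u^2 * SIGMA_X2 term is the prior's penalty on the posterior mean.
    kl = (u ** 2 * SIGMA_X2 + s2 - log_s2 - 1.0) / 2.0
    return recon + beta * kl

def fit(beta):
    """Minimize the loss from a few starts; return (w, u, log_s2)."""
    best = None
    for u0 in (0.5, -0.5, 1.0):
        res = minimize(elbo_loss, x0=[1.0, u0, 0.0], args=(beta,))
        if best is None or res.fun < best.fun:
            best = res
    return best.x

_, u_weak, _ = fit(beta=0.1)    # likelihood dominates: u stays nonzero
_, u_strong, _ = fit(beta=10.0) # prior dominates: u collapses to 0
print(f"beta=0.1: |u| = {abs(u_weak):.3f}")
print(f"beta=10:  |u| = {abs(u_strong):.3f}")
```

In this toy model the stationary-point analysis gives a sharp threshold: a nontrivial solution with `w**2 = SIGMA_X2 - beta` exists only while `beta < SIGMA_X2`; beyond it, the only stationary point is the fully collapsed one (`w = u = 0`, `s2 = 1`), matching the competition described in the abstract.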
Cite
Text
Wang and Ziyin. "Posterior Collapse of a Linear Latent Variable Model." Neural Information Processing Systems, 2022.
Markdown
[Wang and Ziyin. "Posterior Collapse of a Linear Latent Variable Model." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wang2022neurips-posterior/)
BibTeX
@inproceedings{wang2022neurips-posterior,
  title = {{Posterior Collapse of a Linear Latent Variable Model}},
  author = {Wang, Zihao and Ziyin, Liu},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/wang2022neurips-posterior/}
}