Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks
Abstract
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions for the neural network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. Building on the empirical Bayes approach, we propose the Bayesian MOdel Priors Extracted from Deterministic DNN (MOPED) method, which chooses meaningful prior distributions over the weight space using deterministic weights derived from a pretrained DNN of equivalent architecture. We empirically evaluate the proposed approach on real-world applications, including image classification, video activity recognition, and audio classification tasks, across a variety of complex neural network architectures. The proposed method enables scalable variational inference with faster training convergence and provides reliable uncertainty quantification.
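The abstract does not spell out the functional form of the priors or posteriors. As a rough illustration, assuming mean-field Gaussian weight distributions, the sketch below centers both the prior and the initial variational posterior on the pretrained deterministic weights; the function name `moped_gaussian_prior`, the unit prior standard deviation, and the scale parameter `delta` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def moped_gaussian_prior(w_pretrained, delta=0.1):
    """Sketch of a MOPED-style empirical Bayes setup: build a per-weight
    Gaussian prior and an initial variational posterior from the weights
    of a pretrained deterministic DNN of equivalent architecture.

    The unit prior std and the posterior-std scale `delta` are assumed
    here for illustration only.
    """
    mu = np.asarray(w_pretrained, dtype=np.float64)
    prior_mean = mu                 # prior centered on the deterministic weights
    prior_std = np.ones_like(mu)    # assumed unit-variance Gaussian prior
    q_mean = mu.copy()              # posterior mean starts at the pretrained weights
    q_std = delta * np.abs(mu)      # small initial uncertainty, proportional to |w|
    return (prior_mean, prior_std), (q_mean, q_std)

# Toy usage: weights of one "pretrained" layer
w = np.random.default_rng(0).normal(size=(4, 3))
(prior_mu, prior_sigma), (q_mu, q_sigma) = moped_gaussian_prior(w, delta=0.1)
print(prior_mu.shape, bool(q_sigma.min() >= 0.0))
```

Starting the variational posterior near a good deterministic solution, rather than at a random initialization, is consistent with the faster training convergence the abstract reports.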
Cite
Text
Krishnan et al. "Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00102
Markdown
[Krishnan et al. "Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/krishnan2019iccvw-efficient/) doi:10.1109/ICCVW.2019.00102
BibTeX
@inproceedings{krishnan2019iccvw-efficient,
title = {{Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks}},
author = {Krishnan, Ranganath and Subedar, Mahesh and Tickoo, Omesh},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {773--777},
doi = {10.1109/ICCVW.2019.00102},
url = {https://mlanthology.org/iccvw/2019/krishnan2019iccvw-efficient/}
}