On Feynman-Kac Training of Partial Bayesian Neural Networks
Abstract
Recently, partial Bayesian neural networks (pBNNs), which only consider a subset of the parameters to be stochastic, were shown to perform competitively with full Bayesian neural networks. However, pBNNs are often multi-modal in the latent variable space and thus challenging to approximate with parametric models. To address this problem, we propose an efficient sampling-based training strategy, wherein the training of a pBNN is formulated as simulating a Feynman-Kac model. We then describe variations of sequential Monte Carlo samplers that allow us to simultaneously estimate the parameters and the latent posterior distribution of this model at a tractable computational cost. Using various synthetic and real-world datasets, we show that our proposed training scheme outperforms the state of the art in terms of predictive performance.
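To make the abstract's idea concrete, below is a minimal NumPy sketch of the general pattern it describes: particles approximate the posterior over the stochastic parameters while the deterministic parameters are trained by gradient steps on a particle-weighted likelihood, with mini-batch likelihoods acting as Feynman-Kac potentials. This is an illustrative toy, not the authors' algorithm: the model (a scalar `phi * tanh(psi * x)` regressor), the variable names `psi`/`phi`, multinomial resampling, and the random-walk jitter standing in for proper MCMC move kernels are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 1.5 * tanh(0.8 * x) + noise.
x = rng.uniform(-3.0, 3.0, size=200)
y = 1.5 * np.tanh(0.8 * x) + 0.1 * rng.normal(size=200)

n_particles, sigma, lr = 256, 0.1, 1e-4
psi = rng.normal(size=n_particles)  # particles for the stochastic weight, N(0, 1) prior
phi = 0.5                           # deterministic weight; started off zero to break
                                    # the sign symmetry of this toy model

def log_potential(psi, phi, xb, yb):
    """Gaussian log-likelihood of a mini-batch, one value per particle."""
    pred = phi * np.tanh(psi[:, None] * xb[None, :])  # (particles, batch)
    return -0.5 * np.sum((yb - pred) ** 2, axis=1) / sigma**2

for epoch in range(50):
    for i in range(0, len(x), 20):  # mini-batches play the role of FK potentials
        xb, yb = x[i:i + 20], y[i:i + 20]
        logw = log_potential(psi, phi, xb, yb)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Gradient ascent on the particle-weighted log-likelihood in phi.
        h = np.tanh(psi[:, None] * xb[None, :])
        grad = np.sum(w[:, None] * (yb - phi * h) * h) / sigma**2
        phi += lr * grad
        # Multinomial resampling, then a random-walk jitter as a crude
        # stand-in for the MCMC move kernels of a full SMC sampler.
        psi = psi[rng.choice(n_particles, size=n_particles, p=w)]
        psi += 0.05 * rng.normal(size=n_particles)

print(f"trained phi ~ {phi:.2f}, posterior mean of psi ~ {psi.mean():.2f}")
```

The paper's actual samplers and gradient estimators are more sophisticated than this; the sketch only conveys the structure, namely a particle approximation for the stochastic subset of weights alongside point estimates for the rest.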
Cite
Text
Zhao et al. "On Feynman-Kac Training of Partial Bayesian Neural Networks." Artificial Intelligence and Statistics, 2024.

Markdown
[Zhao et al. "On Feynman-Kac Training of Partial Bayesian Neural Networks." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/zhao2024aistats-feynmankac/)

BibTeX
@inproceedings{zhao2024aistats-feynmankac,
title = {{On Feynman-Kac Training of Partial Bayesian Neural Networks}},
author = {Zhao, Zheng and Mair, Sebastian and Schön, Thomas B. and Sjölund, Jens},
booktitle = {Artificial Intelligence and Statistics},
year = {2024},
pages = {3223--3231},
volume = {238},
url = {https://mlanthology.org/aistats/2024/zhao2024aistats-feynmankac/}
}