Incorporating Unlabelled Data into Bayesian Neural Networks

Abstract

Conventional Bayesian Neural Networks (BNNs) are unable to leverage unlabelled data to improve their predictions. To overcome this limitation, we introduce Self-Supervised Bayesian Neural Networks, which use unlabelled data to learn models with suitable prior predictive distributions. This is achieved by leveraging contrastive pretraining techniques and optimising a variational lower bound. We then show that the prior predictive distributions of self-supervised BNNs capture problem semantics better than those of conventional BNNs. In turn, our approach offers improved predictive performance over conventional BNNs, especially in low-budget regimes.
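To make the two-stage recipe in the abstract concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' released code: it assumes a SimCLR-style InfoNCE contrastive loss for pretraining a deterministic encoder on unlabelled data, followed by a mean-field Gaussian variational linear head fit on labelled features by optimising an ELBO (the variational lower bound mentioned above). All names (Encoder, info_nce, BayesianLinear, neg_elbo) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Deterministic feature extractor, pretrained on unlabelled data."""
    def __init__(self, in_dim, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.net(x)

def info_nce(z1, z2, tau=0.1):
    """SimCLR-style contrastive loss between two augmented views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

class BayesianLinear(nn.Module):
    """Mean-field Gaussian variational posterior over a linear classification head."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_classes, in_dim))
        self.log_sigma = nn.Parameter(torch.full((n_classes, in_dim), -3.0))

    def forward(self, feats):
        # Reparameterised weight sample from q(w) = N(mu, sigma^2).
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return feats @ w.t()

    def kl(self):
        # KL(q(w) || p(w)) against a standard-normal prior N(0, I).
        sigma2 = (2 * self.log_sigma).exp()
        return 0.5 * (sigma2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

def neg_elbo(head, feats, y, n_data):
    """Negative variational lower bound: expected NLL plus a minibatch-scaled KL term."""
    return F.cross_entropy(head(feats), y) + head.kl() / n_data

In this sketch, stage one minimises info_nce over pairs of augmented unlabelled inputs to train the Encoder; stage two freezes (or fine-tunes) it and minimises neg_elbo on the labelled set, dividing the KL term by the dataset size as the usual minibatch correction. The pretrained features are what give the Bayesian head a prior predictive aligned with problem semantics, mirroring the abstract's claim.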

Cite

Text

Sharma et al. "Incorporating Unlabelled Data into Bayesian Neural Networks." Transactions on Machine Learning Research, 2024.

Markdown

[Sharma et al. "Incorporating Unlabelled Data into Bayesian Neural Networks." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/sharma2024tmlr-incorporating/)

BibTeX

@article{sharma2024tmlr-incorporating,
  title     = {{Incorporating Unlabelled Data into Bayesian Neural Networks}},
  author    = {Sharma, Mrinank and Rainforth, Tom and Teh, Yee Whye and Fortuin, Vincent},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/sharma2024tmlr-incorporating/}
}