Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping

Abstract

In machine learning, an agent needs to estimate uncertainty to explore and adapt efficiently and to make effective decisions. A common approach to uncertainty estimation maintains an ensemble of models. In recent years, several approaches have been proposed for training ensembles, and conflicting views prevail with regard to the importance of various ingredients of these approaches. In this paper, we aim to clarify the benefits of two ingredients -- prior functions and bootstrapping -- which have come into question. We show that prior functions can significantly improve an ensemble agent's joint predictions across inputs and that bootstrapping affords additional benefits if the signal-to-noise ratio varies across inputs. Our claims are justified by both theoretical and experimental results.
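To make the two ingredients concrete, here is a minimal sketch (not the paper's exact setup) of an ensemble in the style of randomized prior functions: each member is the sum of a trainable model and a fixed, untrained random prior function, and each member is fit on its own bootstrap resample of the data. The toy data, polynomial features, and scale parameter `beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative; not from the paper).
X = np.linspace(-1.0, 1.0, 40)
y = np.sin(3.0 * X) + 0.1 * rng.standard_normal(X.shape)

K = 10       # ensemble size
degree = 5   # polynomial feature degree (assumed model class)
beta = 1.0   # scale of the fixed additive prior functions

Phi = np.vander(X, degree + 1)  # polynomial features, shape (40, 6)

def fit_member(rng):
    # Bootstrap: resample the training set with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    # Fixed random prior function p(x) = Phi(x) @ w_prior; never trained.
    w_prior = beta * rng.standard_normal(degree + 1)
    # Train only the learnable part, fitting the residual y - p(x)
    # on the bootstrap sample by least squares.
    resid = y[idx] - Phi[idx] @ w_prior
    w_fit, *_ = np.linalg.lstsq(Phi[idx], resid, rcond=None)
    # Member prediction is f(x) + p(x), i.e. combined weights.
    return w_fit + w_prior

weights = np.stack([fit_member(rng) for _ in range(K)])
preds = Phi @ weights.T        # shape (n_points, K): one column per member
mean = preds.mean(axis=1)      # ensemble point prediction
std = preds.std(axis=1)        # spread across members as an uncertainty estimate
```

Because the priors are fixed and diverse, members disagree more where data is scarce, and bootstrapping injects extra diversity where observation noise matters; the ensemble spread `std` then serves as a rough uncertainty signal.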

Cite

Text

Dwaracherla et al. "Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping." Transactions on Machine Learning Research, 2023.

Markdown

[Dwaracherla et al. "Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/dwaracherla2023tmlr-ensembles/)

BibTeX

@article{dwaracherla2023tmlr-ensembles,
  title     = {{Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping}},
  author    = {Dwaracherla, Vikranth and Wen, Zheng and Osband, Ian and Lu, Xiuyuan and Asghari, Seyed Mohammad and Van Roy, Benjamin},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/dwaracherla2023tmlr-ensembles/}
}