Likelihood-Ratio-Based Confidence Intervals for Neural Networks
Abstract
This paper introduces a first implementation of a novel likelihood-ratio-based approach for constructing confidence intervals for neural networks. Our method, called DeepLR, offers several qualitative advantages: most notably, the ability to construct asymmetric intervals that expand in regions with a limited amount of data, and the inherent incorporation of factors such as the amount of training time, network architecture, and regularization techniques. Although the current implementation of the method is prohibitively expensive for many deep-learning applications, the high cost may already be justified in specific fields such as medical predictions or astrophysics, where a reliable uncertainty estimate for a single prediction is essential. This work highlights the significant potential of a likelihood-ratio-based uncertainty estimate and establishes a promising avenue for future research.
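The classical statistical principle behind a likelihood-ratio interval can be sketched in a few lines: a parameter value is kept in the interval as long as twice the log-likelihood gap to the maximum-likelihood estimate stays below a chi-squared critical value. The toy Gaussian model, function names, and grid search below are illustrative assumptions, not the paper's DeepLR algorithm:

```python
import math


def gaussian_loglik(mu, data, sigma=1.0):
    """Log-likelihood of the data under a Gaussian with known sigma and mean mu.

    This toy model stands in for a neural network's predictive likelihood;
    it is an assumption for illustration only.
    """
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (x - mu) ** 2 / (2 * sigma ** 2)
        for x in data
    )


def lr_confidence_interval(data, sigma=1.0, chi2_crit=3.841):
    """Approximate 95% likelihood-ratio CI for the Gaussian mean.

    Keeps every mu with 2 * (loglik(mu_hat) - loglik(mu)) <= chi2_crit,
    where chi2_crit = 3.841 is the 0.95 quantile of chi-squared with 1 df.
    A simple grid scan suffices for this one-dimensional toy problem.
    """
    mu_hat = sum(data) / len(data)  # MLE of the mean
    max_ll = gaussian_loglik(mu_hat, data, sigma)
    grid = [mu_hat + i * 0.001 for i in range(-5000, 5001)]
    accepted = [
        mu for mu in grid
        if 2 * (max_ll - gaussian_loglik(mu, data, sigma)) <= chi2_crit
    ]
    return min(accepted), max(accepted)
```

Note that the resulting interval is defined implicitly by the likelihood surface rather than by a symmetric plug-in formula, which is what allows likelihood-ratio intervals to become asymmetric and to widen where the data are sparse.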
Cite
Text
Sluijterman et al. "Likelihood-Ratio-Based Confidence Intervals for Neural Networks." Machine Learning, 2025. doi:10.1007/S10994-024-06639-3
Markdown
[Sluijterman et al. "Likelihood-Ratio-Based Confidence Intervals for Neural Networks." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/sluijterman2025mlj-likelihoodratiobased/) doi:10.1007/S10994-024-06639-3
BibTeX
@article{sluijterman2025mlj-likelihoodratiobased,
title = {{Likelihood-Ratio-Based Confidence Intervals for Neural Networks}},
author = {Sluijterman, Laurens and Cator, Eric and Heskes, Tom},
journal = {Machine Learning},
year = {2025},
pages = {116},
doi = {10.1007/S10994-024-06639-3},
volume = {114},
url = {https://mlanthology.org/mlj/2025/sluijterman2025mlj-likelihoodratiobased/}
}