High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach

Abstract

This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks. It is axiomatic that high-quality PIs should be as narrow as possible, whilst capturing a specified portion of data. We derive a loss function directly from this axiom that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that model uncertainty is accounted for in ensembled form. Benchmark experiments show the method outperforms current state-of-the-art uncertainty quantification methods, reducing average PI width by over 10%.
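The abstract's axiom (intervals as narrow as possible while capturing a target fraction 1 − α of the data) suggests a loss that trades mean PI width against a penalty for coverage shortfall. Below is a minimal, illustrative sketch of such a quality-driven loss; the soft sigmoid coverage indicator, the penalty weight `lam`, and the softening factor `s` are assumptions for illustration, not the paper's exact formulation or hyperparameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_driven_loss(y, y_lower, y_upper, alpha=0.05, lam=15.0, s=160.0):
    """Illustrative quality-driven PI loss: mean width of captured points
    plus a squared penalty when coverage (PICP) falls below 1 - alpha.
    lam and s are assumed hyperparameters, not values from the paper."""
    n = len(y)
    # Soft indicator that y_i lies inside its interval (smooth, so the
    # loss is usable with gradient descent)
    k_soft = sigmoid(s * (y_upper - y)) * sigmoid(s * (y - y_lower))
    # Hard indicator, used to measure the width of captured points only
    k_hard = ((y >= y_lower) & (y <= y_upper)).astype(float)
    # Mean width of intervals that actually capture their point
    mpiw_capt = np.sum((y_upper - y_lower) * k_hard) / (np.sum(k_hard) + 1e-8)
    # Prediction interval coverage probability (soft version)
    picp = np.mean(k_soft)
    # Penalise coverage falling below the target 1 - alpha
    penalty = lam * n / (alpha * (1 - alpha)) * max(0.0, (1 - alpha) - picp) ** 2
    return mpiw_capt + penalty

# Toy usage: fixed-width intervals around noisy targets
rng = np.random.default_rng(0)
y = rng.normal(size=100)
print(quality_driven_loss(y, y - 2.0, y + 2.0))
```

Note how the loss is distribution-free: it depends only on the interval bounds and whether each observation is captured, with no assumed noise model.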

Cite

Text

Pearce et al. "High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach." International Conference on Machine Learning, 2018.

Markdown

[Pearce et al. "High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/pearce2018icml-highquality/)

BibTeX

@inproceedings{pearce2018icml-highquality,
  title     = {{High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach}},
  author    = {Pearce, Tim and Brintrup, Alexandra and Zaki, Mohamed and Neely, Andy},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {4075--4084},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/pearce2018icml-highquality/}
}