Learning Prediction Intervals for Regression: Generalization and Calibration
Abstract
We study the generation of prediction intervals in regression for uncertainty quantification. This task can be formalized as an empirical constrained optimization problem that minimizes the average interval width while maintaining coverage accuracy across the data. We strengthen the existing literature by studying two aspects of this empirical optimization. The first is a general learning theory characterizing the optimality-feasibility tradeoff, which encompasses Lipschitz-continuous and VC-subgraph classes and is exemplified by regression trees and neural networks. The second is a calibration machinery, with corresponding statistical theory, for optimally selecting the regularization parameter that manages this tradeoff, bypassing the overfitting issues that previous approaches face in attaining coverage. We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of test performance against existing benchmarks.
Cite
Text

Chen et al. "Learning Prediction Intervals for Regression: Generalization and Calibration." Artificial Intelligence and Statistics, 2021.

Markdown

[Chen et al. "Learning Prediction Intervals for Regression: Generalization and Calibration." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/chen2021aistats-learning/)

BibTeX
@inproceedings{chen2021aistats-learning,
title = {{Learning Prediction Intervals for Regression: Generalization and Calibration}},
author = {Chen, Haoxian and Huang, Ziyi and Lam, Henry and Qian, Huajie and Zhang, Haofeng},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {820--828},
volume = {130},
url = {https://mlanthology.org/aistats/2021/chen2021aistats-learning/}
}