Risk Bounds for Distributional Regression
Abstract
This work examines risk bounds for nonparametric distributional regression estimators. For convex-constrained distributional regression, general upper bounds are established for the continuous ranked probability score (CRPS) and the worst-case mean squared error (MSE) across the domain. These theoretical results are applied to isotonic and trend filtering distributional regression, yielding convergence rates consistent with those for mean estimation. Furthermore, a general upper bound is derived for distributional regression under non-convex constraints, with a specific application to neural network-based estimators. Comprehensive experiments on both simulated and real data validate the theoretical contributions, demonstrating their practical effectiveness.
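For concreteness, the CRPS referenced in the abstract admits the classical identity CRPS(F, y) = E|X − y| − (1/2)E|X − X′| for X, X′ ~ F i.i.d. The minimal NumPy sketch below (illustrative only; `crps_empirical` is a hypothetical helper, not code from the paper) evaluates this identity for a sample-based forecast.

```python
import numpy as np

def crps_empirical(samples, y):
    """CRPS of the empirical CDF of `samples` against a scalar observation y.

    Uses the identity CRPS(F, y) = E|X - y| - 0.5 * E|X - X'| for X, X' ~ F
    i.i.d., with both expectations taken over the empirical distribution.
    (Illustrative sketch; not the paper's implementation.)
    """
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))                                 # E|X - y|
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))   # 0.5 * E|X - X'|
    return term1 - term2

# Example: score a standard-normal forecast against an observed value.
rng = np.random.default_rng(0)
forecast = rng.normal(size=1000)
print(crps_empirical(forecast, 0.3))
```

Lower CRPS indicates a forecast distribution that concentrates probability mass closer to the realized observation; for a point forecast it reduces to the absolute error.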
Cite
Text
Padilla et al. "Risk Bounds for Distributional Regression." Advances in Neural Information Processing Systems, 2025.

Markdown

[Padilla et al. "Risk Bounds for Distributional Regression." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/padilla2025neurips-risk/)

BibTeX
@inproceedings{padilla2025neurips-risk,
  title = {{Risk Bounds for Distributional Regression}},
  author = {Padilla, Carlos Misael Madrid and Padilla, Oscar Hernan Madrid and Chatterjee, Sabyasachi},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/padilla2025neurips-risk/}
}