Expected Pinball Loss for Quantile Regression and Inverse CDF Estimation

Abstract

We analyze and improve a recent strategy to train a quantile regression model by minimizing an expected pinball loss over all quantiles. Through an asymptotic convergence analysis, we show that minimizing the expected pinball loss can be more efficient at estimating a single quantile than training with the standard pinball loss for that quantile, an insight that generalizes the known deficiencies of the sample quantile in the unconditioned setting. Then, to guarantee a legitimate inverse CDF, we propose using flexible deep lattice networks with a monotonicity constraint on the quantile input to ensure non-crossing quantiles, and show that lattice models can be regularized to the same location-scale family. Our analysis and experiments on simulated and real datasets show that the proposed method produces state-of-the-art legitimate inverse CDF estimates that are likely to be as good as or better for specific target quantiles.
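For readers unfamiliar with the loss named in the title, the following is a minimal sketch of the standard pinball loss and the expected variant the abstract refers to. The notation here (a model f(x, τ) that takes the quantile level as an input, and a uniform distribution over τ) is illustrative and not taken from the paper itself.

% Pinball loss for a target quantile level \tau \in (0, 1),
% applied to a label y and a prediction \hat{y}:
\rho_\tau(y, \hat{y}) =
  \begin{cases}
    \tau \, (y - \hat{y})        & \text{if } y \ge \hat{y}, \\
    (1 - \tau) \, (\hat{y} - y)  & \text{otherwise.}
  \end{cases}

% Expected pinball loss: the quantile level is fed to the model as an
% input, and the loss is averaged over quantile levels, here assumed
% uniform on (0, 1):
\mathcal{L}(f) = \mathbb{E}_{\tau \sim U(0,1)} \,
  \mathbb{E}_{(x, y)} \left[ \rho_\tau\bigl(y, f(x, \tau)\bigr) \right].

Minimizing \rho_\tau alone recovers an estimate of the \tau-th conditional quantile; averaging over all \tau trains a single model whose slice at each \tau estimates that quantile, i.e., an inverse CDF.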

Cite

Text

Narayan et al. "Expected Pinball Loss for Quantile Regression and Inverse CDF Estimation." Transactions on Machine Learning Research, 2024.

Markdown

[Narayan et al. "Expected Pinball Loss for Quantile Regression and Inverse CDF Estimation." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/narayan2024tmlr-expected/)

BibTeX

@article{narayan2024tmlr-expected,
  title     = {{Expected Pinball Loss for Quantile Regression and Inverse CDF Estimation}},
  author    = {Narayan, Taman and Wang, Serena Lutong and Canini, Kevin Robert and Gupta, Maya},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/narayan2024tmlr-expected/}
}