NODE-GAMLSS: Interpretable Uncertainty Modelling via Deep Distributional Regression

Abstract

We propose NODE-GAMLSS, a framework for scalable uncertainty modelling through deep distributional regression. NODE-GAMLSS is an interpretable, attention-based deep learning architecture that models the location, scale, and shape (LSS) of the response distribution conditional on the data, rather than only the conditional mean, enabling quantile prediction and interpretation of feature effects. We benchmark against state-of-the-art interpretable distributional regression models on simulated and real datasets, demonstrating superior quantile estimation, accuracy, and interpretability.
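The core idea of distributional regression, fitting location and scale as functions of the inputs rather than the mean alone, can be sketched as below. This is a minimal toy illustration of ours, not the paper's method: it swaps the attention-based NODE-GAM backbone for a plain linear model, assumes a Gaussian response, and fits by gradient descent on the negative log-likelihood; quantiles then follow in closed form from the predicted location and scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: the noise scale grows with |x|,
# so a mean-only model would miss the uncertainty structure.
x = rng.uniform(-1.0, 1.0, size=500)
y = 2.0 * x + rng.normal(0.0, 0.2 + 0.5 * np.abs(x))

# Distributional regression (toy): mu(x) = w0 + w1*x,
# log sigma(x) = v0 + v1*|x|; minimise the Gaussian NLL
#   log sigma + 0.5 * ((y - mu) / sigma)^2   (+ const).
X = np.column_stack([np.ones_like(x), x])          # design for mu
A = np.column_stack([np.ones_like(x), np.abs(x)])  # design for log sigma
w = np.zeros(2)
v = np.zeros(2)
lr = 0.05

for _ in range(2000):
    mu = X @ w
    sigma = np.exp(A @ v)
    r = (y - mu) / sigma
    # d NLL / d mu = -r / sigma;  d NLL / d log sigma = 1 - r^2
    w -= lr * -(X * (r / sigma)[:, None]).mean(axis=0)
    v -= lr * (A * (1.0 - r**2)[:, None]).mean(axis=0)

# Conditional quantiles in closed form: q_tau(x) = mu(x) + sigma(x) * z_tau.
z90 = 1.2816  # standard-normal 90% quantile
feat = np.array([1.0, 0.8])  # intercept + x = 0.8
q90 = feat @ w + np.exp(np.array([1.0, 0.8]) @ v) * z90
print("slope estimate:", w[1], " 90% quantile at x=0.8:", q90)
```

The fitted slope should recover the true value of about 2, and the estimated scale widens the upper quantile for larger |x|, which is exactly the behaviour a mean-only regression cannot express.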

Cite

Text

De et al. "NODE-GAMLSS: Interpretable Uncertainty Modelling via Deep Distributional Regression." NeurIPS 2024 Workshops: BDU, 2024.

Markdown

[De et al. "NODE-GAMLSS: Interpretable Uncertainty Modelling via Deep Distributional Regression." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/de2024neuripsw-nodegamlss/)

BibTeX

@inproceedings{de2024neuripsw-nodegamlss,
  title     = {{NODE-GAMLSS: Interpretable Uncertainty Modelling via Deep Distributional Regression}},
  author    = {De, Ananyapam and Thielmann, Anton Frederik and Säfken, Benjamin},
  booktitle = {NeurIPS 2024 Workshops: BDU},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/de2024neuripsw-nodegamlss/}
}