Interpretable Generalized Additive Models for Datasets with Missing Values

Abstract

Many important datasets contain samples that are missing one or more feature values. Maintaining the interpretability of machine learning models in the presence of such missing data is challenging. Singly or multiply imputing missing values complicates the model’s mapping from features to labels. On the other hand, reasoning on indicator variables that represent missingness introduces a potentially large number of additional terms, sacrificing sparsity. We solve these problems with M-GAM, a sparse generalized additive modeling approach that incorporates missingness indicators and their interaction terms while maintaining sparsity through $\ell_0$ regularization. We show that M-GAM provides similar or superior accuracy to prior methods while significantly improving sparsity relative to either imputation or naïve inclusion of indicator variables.
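
To make the feature construction the abstract describes concrete, below is a minimal sketch: per-feature missingness indicators plus indicator-feature interaction terms, fed to a sparse additive (here, linear) model. This is not the authors' implementation; the toy data, the zero-fill convention, and the use of scikit-learn's $\ell_1$-penalized logistic regression as a convex stand-in for the paper's $\ell_0$ regularization are all illustrative assumptions.

```python
# Sketch of M-GAM-style feature construction: missingness indicators and
# their interactions with observed features, fit with a sparse model.
# The L1 penalty below is an assumed surrogate for the paper's l0 penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two features; labels assigned before missingness is injected.
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X[rng.random((n, 2)) < 0.2] = np.nan  # ~20% of entries missing

def mgam_features(X):
    """Zero-filled features, missingness indicators, and indicator-feature
    interactions. Zero-filling lets the indicator and interaction terms,
    rather than an imputed value, carry the effect of missingness."""
    M = np.isnan(X).astype(float)      # one indicator per feature
    X0 = np.nan_to_num(X, nan=0.0)     # zero-fill missing entries
    inter = np.hstack([M[:, [j]] * X0 for j in range(X.shape[1])])
    return np.hstack([X0, M, inter])

Z = mgam_features(X)

# Sparse fit; L1 stands in for the l0 regularization used in the paper.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z, y)
print("nonzero coefficients:", np.count_nonzero(clf.coef_))
```

With this encoding, a sample's missingness pattern enters the additive model directly through the indicator and interaction terms, so no imputed values appear in the learned mapping, and regularization can prune the terms that do not help.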

Cite

Text

McTavish et al. "Interpretable Generalized Additive Models for Datasets with Missing Values." Neural Information Processing Systems, 2024. doi:10.52202/079017-0380

Markdown

[McTavish et al. "Interpretable Generalized Additive Models for Datasets with Missing Values." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/mctavish2024neurips-interpretable/) doi:10.52202/079017-0380

BibTeX

@inproceedings{mctavish2024neurips-interpretable,
  title     = {{Interpretable Generalized Additive Models for Datasets with Missing Values}},
  author    = {McTavish, Hayden and Donnelly, Jon and Seltzer, Margo and Rudin, Cynthia},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0380},
  url       = {https://mlanthology.org/neurips/2024/mctavish2024neurips-interpretable/}
}