Beyond Single-Feature Importance with ICECREAM
Abstract
Which set of features was responsible for a certain output of a machine learning model? Which components caused the failure of a cloud computing application? These are just two examples of questions we address in this work by Identifying Coalition-based Explanations for Common and Rare Events in Any Model (ICECREAM). Specifically, we propose an information-theoretic quantitative measure for the influence of a coalition of variables on the distribution of a target variable. This allows us to identify which set of factors is essential to obtain a certain outcome, in contrast to well-established explainability and causal contribution analysis methods that rank individual factors. In experiments with synthetic and real-world data, we show that ICECREAM outperforms state-of-the-art methods for explainability and root cause analysis, and achieves impressive accuracy in both tasks.
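To make the idea of scoring a whole coalition rather than a single feature concrete, the sketch below uses a generic information-theoretic stand-in: the mutual information I(Y; X_S) between the target Y and a coalition of features X_S, estimated from discrete samples. This is only an illustration under that assumption, not the ICECREAM measure from the paper; the names `coalition_score`, `rows`, and `target` are hypothetical.

```python
# Minimal sketch: score a coalition S of features by the mutual information
# I(Y; X_S), i.e. the expected KL divergence between P(Y | X_S) and P(Y),
# estimated from empirical frequencies of discrete samples.
# NOTE: this is a generic information-theoretic coalition score for
# illustration only, not the exact measure proposed in the paper.
from collections import Counter
from itertools import combinations
import math


def coalition_score(rows, coalition, target):
    """Estimate I(Y; X_S) for target Y and feature coalition S from samples.

    rows: list of dicts mapping variable names to discrete values.
    """
    n = len(rows)
    joint = Counter((tuple(r[f] for f in coalition), r[target]) for r in rows)
    marg_s = Counter(tuple(r[f] for f in coalition) for r in rows)
    marg_y = Counter(r[target] for r in rows)
    score = 0.0
    for (s_val, y_val), count in joint.items():
        p_sy = count / n
        p_s = marg_s[s_val] / n
        p_y = marg_y[y_val] / n
        score += p_sy * math.log(p_sy / (p_s * p_y))
    return score


if __name__ == "__main__":
    # Toy data: y = a OR b, so the coalition {a, b} fully determines y,
    # while each single feature only partially explains it.
    rows = [
        {"a": 0, "b": 0, "y": 0},
        {"a": 0, "b": 1, "y": 1},
        {"a": 1, "b": 0, "y": 1},
        {"a": 1, "b": 1, "y": 1},
    ]
    features = ["a", "b"]
    for k in (1, 2):
        for coalition in combinations(features, k):
            print(coalition, round(coalition_score(rows, coalition, "y"), 3))
```

In this toy example the pair {a, b} receives a strictly higher score than either feature alone, which is the kind of coalition-level ranking (as opposed to per-feature importance) that the abstract describes.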
Cite
Text
Oesterle et al. "Beyond Single-Feature Importance with ICECREAM." Proceedings of the Fourth Conference on Causal Learning and Reasoning, 2025.
Markdown
[Oesterle et al. "Beyond Single-Feature Importance with ICECREAM." Proceedings of the Fourth Conference on Causal Learning and Reasoning, 2025.](https://mlanthology.org/clear/2025/oesterle2025clear-beyond/)
BibTeX
@inproceedings{oesterle2025clear-beyond,
  title     = {{Beyond Single-Feature Importance with ICECREAM}},
  author    = {Oesterle, Michael and Blöbaum, Patrick and Mastakouri, Atalanti A. and Kirschbaum, Elke},
  booktitle = {Proceedings of the Fourth Conference on Causal Learning and Reasoning},
  year      = {2025},
  pages     = {359--389},
  volume    = {275},
  url       = {https://mlanthology.org/clear/2025/oesterle2025clear-beyond/}
}