Computing Abductive Explanations for Boosted Trees

Abstract

Boosted trees are a dominant ML model, exhibiting high accuracy. However, boosted trees are hardly intelligible, and this is a problem whenever they are used in safety-critical applications. Indeed, in such a context, provably sound explanations for the predictions made are expected. Recent work has shown how subset-minimal abductive explanations can be derived for boosted trees, using automated reasoning techniques. However, the generation of such well-founded explanations is intractable in the general case. To improve the scalability of their generation, we introduce the notion of tree-specific explanation for a boosted tree. We show that tree-specific explanations are provably sound abductive explanations that can be computed in polynomial time. We also explain how to derive a subset-minimal abductive explanation from a tree-specific explanation. Experiments on various datasets show the computational benefits of leveraging tree-specific explanations for deriving subset-minimal abductive explanations.
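The step of shrinking a sound abductive explanation to a subset-minimal one can be sketched generically as a deletion-based loop: try dropping each feature in turn and keep the drop whenever the remaining set still entails the prediction. The `is_explanation` oracle and the toy AND-model below are illustrative assumptions for this sketch, not the paper's actual procedure (which reasons over the boosted tree itself).

```python
def minimize_explanation(features, is_explanation):
    """Deletion-based minimization: given a sound abductive explanation
    (a set of fixed features guaranteeing the prediction) and an oracle
    is_explanation(subset) -> bool, return a subset-minimal explanation
    using one oracle call per feature."""
    expl = list(features)
    for f in list(expl):
        candidate = [g for g in expl if g != f]
        if is_explanation(candidate):
            # f was not needed: the smaller set still entails the prediction
            expl = candidate
    return expl


# Toy illustration (assumed model, not from the paper): the classifier
# predicts 1 iff x1 AND x2, and the instance fixes x1=x2=x3=1.
# A feature subset explains the prediction iff it contains x1 and x2.
def is_explanation(subset):
    return "x1" in subset and "x2" in subset


print(minimize_explanation(["x1", "x2", "x3"], is_explanation))
# the irrelevant feature x3 is dropped
```

Each successful deletion is permanent, so the loop makes exactly one oracle call per feature; subset-minimality follows because every remaining feature was tested and found necessary.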

Cite

Text

Audemard et al. "Computing Abductive Explanations for Boosted Trees." Artificial Intelligence and Statistics, 2023.

Markdown

[Audemard et al. "Computing Abductive Explanations for Boosted Trees." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/audemard2023aistats-computing/)

BibTeX

@inproceedings{audemard2023aistats-computing,
  title     = {{Computing Abductive Explanations for Boosted Trees}},
  author    = {Audemard, Gilles and Lagniez, Jean-Marie and Marquis, Pierre and Szczepanski, Nicolas},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {4699--4711},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/audemard2023aistats-computing/}
}