Explanations of Black-Box Models Based on Directional Feature Interactions

Abstract

As machine learning algorithms are deployed ubiquitously across a variety of domains, it is imperative to make these often black-box models transparent. Several recent works explain black-box models by capturing the most influential features for prediction per instance; such explanation methods are univariate, as they characterize importance per feature. We extend univariate explanations to a higher order; this enhances explainability, as bivariate methods can capture feature interactions in black-box models, represented as a directed graph. Analyzing this graph enables us to discover groups of features that are equally important (i.e., interchangeable), while the notion of directionality allows us to identify the most influential features. We apply our bivariate method to Shapley value explanations and experimentally demonstrate the ability of directional explanations to discover feature interactions. We show the superiority of our method over the state of the art on CIFAR10, IMDB, Census, Divorce, Drug, and gene data.
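The interaction graph described in the abstract is built on Shapley-style pairwise scores. As a point of reference only, the sketch below gives a Monte Carlo estimate of the classic (symmetric) Shapley interaction index for feature pairs of a toy black-box model; the paper's contribution is a directional, asymmetric generalization of such scores, so this is not the authors' estimator. The toy model `f`, the zero baseline used for masking, and the coalition-sampling scheme are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): Monte Carlo estimate of the
# Shapley interaction index for feature pairs (i, j) of a black-box f at
# one instance x. Masked-out features are replaced with a baseline value.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy black-box: x0 and x1 interact multiplicatively, x2 acts alone.
    return x[0] * x[1] + 0.5 * x[2]

def masked_eval(x, baseline, subset):
    # Evaluate f with only the features in `subset` taken from x;
    # every other feature is set to the baseline.
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return f(z)

def shapley_interaction(x, baseline, i, j, n_samples=2000):
    # Average the discrete mixed difference over random coalitions
    # S ⊆ N \ {i, j}. Drawing |S| uniformly, then S uniformly at that
    # size, reproduces the Shapley interaction weighting exactly.
    others = [k for k in range(len(x)) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        size = rng.integers(0, len(others) + 1)
        S = list(rng.choice(others, size=size, replace=False)) if size else []
        total += (masked_eval(x, baseline, S + [i, j])
                  - masked_eval(x, baseline, S + [i])
                  - masked_eval(x, baseline, S + [j])
                  + masked_eval(x, baseline, S))
    return total / n_samples

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
for i, j in itertools.combinations(range(3), 2):
    print(f"interaction({i},{j}) = {shapley_interaction(x, baseline, i, j):.3f}")
```

On this toy model the pair (0, 1) receives an interaction score near 2.0 (the multiplicative term x0·x1 at x = [1, 2, 3]), while pairs involving the purely additive feature x2 score near zero; a directional method as in the paper would additionally orient each nonzero edge in the resulting interaction graph.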

Cite

Text

Masoomi et al. "Explanations of Black-Box Models Based on Directional Feature Interactions." International Conference on Learning Representations, 2022.

Markdown

[Masoomi et al. "Explanations of Black-Box Models Based on Directional Feature Interactions." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/masoomi2022iclr-explanations/)

BibTeX

@inproceedings{masoomi2022iclr-explanations,
  title     = {{Explanations of Black-Box Models Based on Directional Feature Interactions}},
  author    = {Masoomi, Aria and Hill, Davin and Xu, Zhonghui and Hersh, Craig P. and Silverman, Edwin K. and Castaldi, Peter J. and Ioannidis, Stratis and Dy, Jennifer},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/masoomi2022iclr-explanations/}
}