Interpretable Counterfactual Explanations Guided by Prototypes

Abstract

We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions using class prototypes. We show that class prototypes, obtained either from an encoder or through class-specific k-d trees, significantly speed up the search for counterfactual instances and yield more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level, and use them to illustrate the effectiveness of our method on an image dataset and a tabular dataset: MNIST and Breast Cancer Wisconsin (Diagnostic), respectively. The method also eliminates the computational bottleneck that numerical gradient evaluation imposes on *black-box* models.
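The k-d tree idea from the abstract can be sketched as follows: build one tree per class over the training data, then query it to obtain a prototype of the target (counterfactual) class that guides the search. This is a minimal illustration, not the authors' implementation; the toy data, the `class_prototype` helper, and the choice of averaging the `k` nearest points are all assumptions made for the example.

```python
import numpy as np
from scipy.spatial import KDTree

# Toy training data: two classes in a 2-D feature space (illustrative only).
X_class0 = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.2]])
X_class1 = np.array([[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])

# One k-d tree per class, so nearest same-class points can be found quickly
# without evaluating the model.
trees = {0: KDTree(X_class0), 1: KDTree(X_class1)}

def class_prototype(x, target_class, k=2):
    """Return the mean of the k nearest training points of the target class,
    used as the prototype that pulls the counterfactual search towards
    that class."""
    tree = trees[target_class]
    _, idx = tree.query(x, k=k)
    return tree.data[idx].mean(axis=0)

x = np.array([0.2, 0.1])       # instance to explain (currently class 0)
proto = class_prototype(x, 1)  # prototype of the desired counterfactual class
```

In the paper's setting, the counterfactual is then optimized with an additional loss term that penalizes distance to `proto`, which is what speeds up convergence and keeps the result in-distribution.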

Cite

Text

Van Looveren and Klaise. "Interpretable Counterfactual Explanations Guided by Prototypes." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86520-7_40

Markdown

[Van Looveren and Klaise. "Interpretable Counterfactual Explanations Guided by Prototypes." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021.](https://mlanthology.org/ecmlpkdd/2021/looveren2021ecmlpkdd-interpretable/) doi:10.1007/978-3-030-86520-7_40

BibTeX

@inproceedings{looveren2021ecmlpkdd-interpretable,
  title     = {{Interpretable Counterfactual Explanations Guided by Prototypes}},
  author    = {Van Looveren, Arnaud and Klaise, Janis},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2021},
  pages     = {650--665},
  doi       = {10.1007/978-3-030-86520-7_40},
  url       = {https://mlanthology.org/ecmlpkdd/2021/looveren2021ecmlpkdd-interpretable/}
}