Counterfactual Explanation Trees: Transparent and Consistent Actionable Recourse with Decision Trees

Abstract

Counterfactual Explanation (CE) is a post-hoc explanation method that provides a perturbation for altering the prediction result of a classifier. An individual can interpret the perturbation as an "action" for obtaining the desired decision result. Existing CE methods focus on providing an action optimized for a given single instance; they do not address the case where actions must be assigned to multiple instances simultaneously. In such cases, we need a CE framework that assigns actions to multiple instances in a transparent and consistent way. In this study, we propose the Counterfactual Explanation Tree (CET), which assigns effective actions with decision trees. Owing to the properties of decision trees, our CET has two advantages: (1) transparency: the reasons for assigning actions are summarized in an interpretable structure, and (2) consistency: these reasons do not conflict with one another. We learn a CET in two steps: (i) compute one effective action for multiple instances and (ii) partition the instances to balance effectiveness and interpretability. Numerical experiments and user studies demonstrate the efficacy of our CET in comparison with existing methods.
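The abstract describes the two learning steps only at a high level. The sketch below is a minimal toy illustration of that structure, not the authors' algorithm: the single-feature action space, the k-means grouping used to drive the partition, and all function names are illustrative assumptions, with scikit-learn standing in for the classifier and the tree.

```python
import numpy as np
from itertools import product
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def candidate_actions(n_features, deltas=(-1.0, 1.0)):
    # Hypothetical action space: shift a single feature by a fixed delta.
    for j, d in product(range(n_features), deltas):
        a = np.zeros(n_features)
        a[j] = d
        yield a

def best_common_action(X, clf, target=1):
    # Step (i): one shared action that flips as many predictions as possible.
    best, best_rate = None, -1.0
    for a in candidate_actions(X.shape[1]):
        rate = np.mean(clf.predict(X + a) == target)
        if rate > best_rate:
            best, best_rate = a, rate
    return best, best_rate

def fit_cet(X_denied, clf, n_groups=4, max_depth=2):
    # Step (ii) stand-in: group the denied instances with k-means, mimic the
    # grouping with a shallow decision tree (the interpretable structure),
    # then attach one common action to each leaf.
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X_denied)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_denied, labels)
    leaves = tree.apply(X_denied)
    leaf_actions = {leaf: best_common_action(X_denied[leaves == leaf], clf)[0]
                    for leaf in np.unique(leaves)}
    return tree, leaf_actions

def assign_action(x, tree, leaf_actions):
    # Route an instance through the tree and return its leaf's action.
    return leaf_actions[tree.apply(x.reshape(1, -1))[0]]

# Toy usage: a linear classifier on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
X_denied = X[clf.predict(X) == 0]
tree, leaf_actions = fit_cet(X_denied, clf)
print(assign_action(X_denied[0], tree, leaf_actions))
```

The depth limit on the tree plays the role of the interpretability side of the trade-off in step (ii): a shallower tree yields fewer, more transparent action groups, at the cost of each group's common action being effective for fewer instances.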

Cite

Text

Kanamori et al. "Counterfactual Explanation Trees: Transparent and Consistent Actionable Recourse with Decision Trees." Artificial Intelligence and Statistics, 2022.

Markdown

[Kanamori et al. "Counterfactual Explanation Trees: Transparent and Consistent Actionable Recourse with Decision Trees." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/kanamori2022aistats-counterfactual/)

BibTeX

@inproceedings{kanamori2022aistats-counterfactual,
  title     = {{Counterfactual Explanation Trees: Transparent and Consistent Actionable Recourse with Decision Trees}},
  author    = {Kanamori, Kentaro and Takagi, Takuya and Kobayashi, Ken and Ike, Yuichi},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {1846--1870},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/kanamori2022aistats-counterfactual/}
}