On Tackling Explanation Redundancy in Decision Trees (Extended Abstract)
Abstract
Claims about the interpretability of decision trees can be traced back to the origins of machine learning (ML). Indeed, given some input consistent with a decision tree's path, the explanation for the resulting prediction consists of the features in that path. Moreover, a growing number of works propose the use of decision trees, and of other so-called interpretable models, as a possible solution for deploying ML models in high-risk applications. This paper overviews recent theoretical and practical results which demonstrate that for most decision trees, tree paths exhibit so-called explanation redundancy, in that logically sound explanations can often be significantly more succinct than what the features in the path dictate. More importantly, such decision tree explanations can be computed in polynomial time, and so can be produced with essentially no effort other than traversing the decision tree. The experimental results, obtained on a large range of publicly available decision trees, support the paper's claims.
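To make the polynomial-time claim concrete, the sketch below illustrates the general idea in Python: a path literal is redundant if every leaf still consistent with the remaining path literals yields the same prediction, and each such check only requires enumerating the tree's paths. This is an illustrative sketch under assumptions (a dict-based tree representation, numeric threshold tests, and a toy example tree), not the authors' implementation.

```python
# Illustrative sketch (assumed representation): greedily drop redundant
# literals from a decision-tree path explanation. Internal nodes are
# {"feat": i, "thr": t, "lo": node, "hi": node}; leaves are {"cls": c}.

INF = float("inf")

def predict(node, x):
    """Follow the tree path for instance x and return the leaf class."""
    while "cls" not in node:
        node = node["lo"] if x[node["feat"]] <= node["thr"] else node["hi"]
    return node["cls"]

def path_literals(node, x):
    """Collect the (feature, op, threshold) tests on the path taken by x."""
    lits = []
    while "cls" not in node:
        f, t = node["feat"], node["thr"]
        if x[f] <= t:
            lits.append((f, "<=", t))
            node = node["lo"]
        else:
            lits.append((f, ">", t))
            node = node["hi"]
    return lits

def leaves(node, lits=()):
    """Enumerate (path literals, class) for every leaf of the tree."""
    if "cls" in node:
        yield list(lits), node["cls"]
        return
    f, t = node["feat"], node["thr"]
    yield from leaves(node["lo"], lits + ((f, "<=", t),))
    yield from leaves(node["hi"], lits + ((f, ">", t),))

def feasible(lits_a, lits_b):
    """True iff some point satisfies both sets of threshold literals."""
    lo, hi = {}, {}
    for f, op, t in list(lits_a) + list(lits_b):
        if op == "<=":
            hi[f] = min(hi.get(f, INF), t)
        else:
            lo[f] = max(lo.get(f, -INF), t)
    return all(lo.get(f, -INF) < hi.get(f, INF) for f in set(lo) | set(hi))

def explanation(tree, x):
    """Greedily drop path literals whose removal cannot change the prediction."""
    target = predict(tree, x)
    kept = path_literals(tree, x)
    for lit in list(kept):
        trial = [l for l in kept if l != lit]
        # lit is redundant if every leaf consistent with the remaining
        # literals still predicts the same class
        if all(c == target
               for leaf_lits, c in leaves(tree) if feasible(trial, leaf_lits)):
            kept = trial
    return kept

# Toy tree where the root test is redundant for the example instance.
tree = {"feat": 0, "thr": 0.5,
        "lo": {"feat": 1, "thr": 0.5, "lo": {"cls": 0}, "hi": {"cls": 1}},
        "hi": {"feat": 1, "thr": 0.5, "lo": {"cls": 0}, "hi": {"cls": 1}}}
x = [0.2, 0.8]
print(path_literals(tree, x))   # [(0, '<=', 0.5), (1, '>', 0.5)]
print(explanation(tree, x))     # [(1, '>', 0.5)] -- the root test is dropped
```

In this toy example the path explanation lists two features, but only the second test is needed to entail the prediction, which is exactly the kind of redundancy the paper studies; the greedy check runs in time polynomial in the tree size.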
Cite
Text
Izza et al. "On Tackling Explanation Redundancy in Decision Trees (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/779
Markdown
[Izza et al. "On Tackling Explanation Redundancy in Decision Trees (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/izza2023ijcai-tackling/) doi:10.24963/IJCAI.2023/779
BibTeX
@inproceedings{izza2023ijcai-tackling,
title = {{On Tackling Explanation Redundancy in Decision Trees (Extended Abstract)}},
author = {Izza, Yacine and Ignatiev, Alexey and Marques-Silva, João},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6900--6904},
doi = {10.24963/IJCAI.2023/779},
url = {https://mlanthology.org/ijcai/2023/izza2023ijcai-tackling/}
}