Explaining with Trees: Interpreting CNNs Using Hierarchies
Abstract
Challenges remain in providing interpretable explanations for neural network decision-making in explainable AI (xAI). Existing methods like Integrated Gradients produce noisy maps, and LIME, while intuitive, may deviate from the model’s internal logic. We introduce a framework that uses hierarchical segmentation techniques for faithful and interpretable explanations of Convolutional Neural Networks (CNNs). Our method constructs model-based hierarchical segmentations that maintain fidelity to the model’s decision-making process and allow both human-centric and model-centric segmentation. This approach can be combined with various xAI methods and provides multiscale explanations that help identify biases and improve understanding of neural network predictive behavior. Experiments show that our framework, xAiTrees, delivers highly interpretable and faithful model explanations, not only surpassing traditional xAI methods but also offering a novel approach to enhancing xAI interpretability.
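As a rough illustration of the idea of region-level, multiscale explanations (not the paper's xAiTrees implementation), the sketch below averages a pixel-level attribution map over segmentations computed at several scales with scikit-image. The input image, the random stand-in saliency map, and the choice of Felzenszwalb segmentation are placeholder assumptions for demonstration only.

```python
# Illustrative sketch, NOT the authors' method: aggregate pixel attributions
# over multiscale segmentations so each region receives a single score.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb
from skimage.util import img_as_float

image = img_as_float(astronaut())        # stand-in input image (H x W x 3)
rng = np.random.default_rng(0)
saliency = rng.random(image.shape[:2])   # stand-in pixel attributions (H x W)

def region_scores(labels, saliency):
    """Average the pixel attributions inside each segment."""
    scores = np.zeros_like(saliency)
    for lab in np.unique(labels):
        mask = labels == lab
        scores[mask] = saliency[mask].mean()
    return scores

# Coarse-to-fine segmentations; a larger `scale` yields larger regions.
for scale in (800, 200, 50):
    labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
    scores = region_scores(labels, saliency)
    print(f"scale={scale}: {labels.max() + 1} regions, "
          f"top region score={scores.max():.3f}")
```

In practice the saliency map would come from an attribution method such as Integrated Gradients, and the paper builds the segmentation hierarchy from the model itself rather than from low-level image cues as done here.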
Cite
Text
Rodrigues et al. "Explaining with Trees: Interpreting CNNs Using Hierarchies." Transactions on Machine Learning Research, 2026.
Markdown
[Rodrigues et al. "Explaining with Trees: Interpreting CNNs Using Hierarchies." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/rodrigues2026tmlr-explaining/)
BibTeX
@article{rodrigues2026tmlr-explaining,
  title = {{Explaining with Trees: Interpreting CNNs Using Hierarchies}},
  author = {Rodrigues, Caroline Mazini and Boutry, Nicolas and Najman, Laurent},
  journal = {Transactions on Machine Learning Research},
  year = {2026},
  url = {https://mlanthology.org/tmlr/2026/rodrigues2026tmlr-explaining/}
}