Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations

Abstract

Prototypical parts-based networks are becoming increasingly popular due to their faithful self-explanations. However, their similarity maps are computed in the penultimate network layer, so the receptive field of a prototype's activation region often extends beyond that region. As a result, the activation can depend on parts of the image outside the highlighted area, which can lead to misleading interpretations. We name this undesired behavior spatial explanation misalignment and introduce an interpretability benchmark with a set of dedicated metrics for quantifying this phenomenon. In addition, we propose a method for misalignment compensation and apply it to existing state-of-the-art models. We show the expressiveness of our benchmark and the effectiveness of the proposed compensation methodology through extensive empirical studies.

Cite

Text

Sacha et al. "Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I19.30154

Markdown

[Sacha et al. "Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/sacha2024aaai-interpretability/) doi:10.1609/AAAI.V38I19.30154

BibTeX

@inproceedings{sacha2024aaai-interpretability,
  title     = {{Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations}},
  author    = {Sacha, Mikolaj and Jura, Bartosz and Rymarczyk, Dawid and Struski, Lukasz and Tabor, Jacek and Zielinski, Bartosz},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {21563--21573},
  doi       = {10.1609/aaai.v38i19.30154},
  url       = {https://mlanthology.org/aaai/2024/sacha2024aaai-interpretability/}
}