ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation
Abstract
We propose ProtoArgNet, a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning as found, e.g., in ProtoPNet. While earlier approaches associate every class with multiple prototypical-parts, ProtoArgNet uses super-prototypes that combine prototypical-parts into a unified class representation. This is done by combining local activations of prototypes in an MLP-like manner, enabling the localization of prototypes and learning (non-linear) spatial relationships among them. By leveraging a form of argumentation, ProtoArgNet is capable of providing both supporting (i.e. `this looks like that') and attacking (i.e. `this differs from that') explanations. We demonstrate on several datasets that ProtoArgNet outperforms state-of-the-art prototypical-part-learning approaches. Moreover, the argumentation component in ProtoArgNet is customisable to the user's cognitive requirements by a process of sparsification, which leads to more compact explanations compared to state-of-the-art approaches.
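As a rough illustration of the super-prototype idea described above (not the authors' implementation; all shapes and layer sizes below are illustrative assumptions), local prototype activations over a spatial grid can be flattened and fed to a small MLP head, whose signed weights into a class score can then be read as supporting versus attacking evidence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not taken from the paper):
# H x W spatial grid of the backbone feature map, P prototypical-parts, C classes.
H, W, P, C = 7, 7, 5, 3

# Local activations: similarity of each prototypical-part at each spatial location.
activations = rng.random((P, H, W))

# Flatten the spatial activation maps so an MLP can learn (non-linear)
# spatial relationships among prototypes, yielding a unified
# "super-prototype" representation per class.
x = activations.reshape(-1)                     # shape (P*H*W,)

# A one-hidden-layer MLP head (randomly initialised here purely for illustration).
W1 = rng.standard_normal((64, P * H * W)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((C, 64)) * 0.01
b2 = np.zeros(C)

hidden = np.maximum(W1 @ x + b1, 0.0)           # ReLU
scores = W2 @ hidden + b2                       # one score per class

# In the spirit of the paper's argumentation component, positive
# contributions to the winning class score play the role of supporting
# ("this looks like that") evidence and negative ones of attacking
# ("this differs from that") evidence.
pred = int(np.argmax(scores))
```

Sparsifying the MLP weights (as the abstract mentions) would then shrink the set of supporting/attacking prototypes reported per decision.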
Cite
Text
Ayoobi et al. "ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I2.32173
Markdown
[Ayoobi et al. "ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/ayoobi2025aaai-protoargnet/) doi:10.1609/AAAI.V39I2.32173
BibTeX
@inproceedings{ayoobi2025aaai-protoargnet,
title = {{ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation}},
author = {Ayoobi, Hamed and Potyka, Nico and Toni, Francesca},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {1791--1799},
doi = {10.1609/AAAI.V39I2.32173},
url = {https://mlanthology.org/aaai/2025/ayoobi2025aaai-protoargnet/}
}