Discriminative Attribution from Paired Images

Abstract

We present a method for deep neural network interpretability that combines feature attribution with counterfactual explanations to generate attribution maps highlighting the most discriminative features between classes. Crucially, this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner, thus preventing potential observer bias. We evaluate the proposed method on six diverse datasets, and use it to discover previously unknown morphological features of synapses in Drosophila melanogaster. We show quantitatively and qualitatively that the highlighted features are substantially more discriminative than those extracted using conventional attribution methods, and that our approach improves upon similar methods for counterfactual explainability. We argue that the extracted explanations are better suited for understanding fine-grained class differences as learned by a deep neural network, in particular for image domains where humans have little to no visual priors, such as biomedical datasets.
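The core idea of pairing an image with a counterfactual and attributing the classifier's decision to where the pair differs can be illustrated with a minimal sketch. This is not the authors' reference implementation; the toy linear classifier, the gradient-times-difference attribution rule, and the changed pixel locations are all assumptions made purely for illustration:

```python
import numpy as np

# Toy linear "classifier" over flattened 4x4 images, 2 classes
# (assumed setup for illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))

# A real image and a hypothetical counterfactual that differs
# only in a few pixels (the discriminative region).
x_real = rng.normal(size=(4, 4))
x_cf = x_real.copy()
x_cf[1, 1] += 2.0
x_cf[2, 3] -= 1.5

# For a linear model, the gradient of the logit difference
# (class 0 minus class 1) w.r.t. the input is constant: W[0] - W[1].
grad = (W[0] - W[1]).reshape(4, 4)

# Discriminative attribution: gradient times input difference, so only
# pixels that actually changed between the pair receive attribution mass.
attribution = grad * (x_real - x_cf)

# Pixels that are identical in both images get exactly zero attribution.
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = mask[2, 3] = True
assert np.allclose(attribution[~mask], 0.0)
```

The point of the sketch is the masking effect: multiplying a sensitivity map by the real-minus-counterfactual residual concentrates attribution on exactly the features that distinguish the two classes, rather than on everything the classifier happens to be sensitive to.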

Cite

Text

Eckstein et al. "Discriminative Attribution from Paired Images." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25069-9_27

Markdown

[Eckstein et al. "Discriminative Attribution from Paired Images." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/eckstein2022eccvw-discriminative/) doi:10.1007/978-3-031-25069-9_27

BibTeX

@inproceedings{eckstein2022eccvw-discriminative,
  title     = {{Discriminative Attribution from Paired Images}},
  author    = {Eckstein, Nils and Bukhari, Habib and Bates, Alexander S. and Jefferis, Gregory S. X. E. and Funke, Jan},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {406--422},
  doi       = {10.1007/978-3-031-25069-9_27},
  url       = {https://mlanthology.org/eccvw/2022/eckstein2022eccvw-discriminative/}
}