Take 5: Interpretable Image Classification with a Handful of Features
Abstract
Deep Neural Networks use thousands of mostly incomprehensible features to identify a single class, a decision no human can follow. We propose an interpretable, sparse, and low-dimensional final decision layer in a deep neural network, with measurable aspects of interpretability, and demonstrate it on fine-grained image classification. We argue that a human can only understand the decision of a machine learning model if the features are interpretable and only very few of them are used for a single decision. To that end, the final layer has to be sparse and – to make interpreting the features feasible – low-dimensional. We call a model with a Sparse Low-Dimensional Decision layer an “SLDD-Model”. We show that an SLDD-Model is easier to interpret locally and globally than a dense high-dimensional decision layer, while maintaining competitive accuracy. Additionally, we propose a loss function that improves a model’s feature diversity and accuracy. Our more interpretable SLDD-Model uses only 5 out of just 50 features per class, while maintaining 97% to 100% of the accuracy on four common benchmark datasets compared to the baseline model with 2048 features.
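To illustrate the idea of a sparse, low-dimensional decision layer, the following is a minimal PyTorch-style sketch. The dimensions (2048 backbone features, 50 interpretable features, 5 features per class) follow the abstract, but the class name `SLDDHead`, the magnitude-based pruning step, and all other identifiers are hypothetical stand-ins for illustration, not the authors' exact procedure.

```python
import torch
import torch.nn as nn


class SLDDHead(nn.Module):
    """Illustrative sparse, low-dimensional decision head (hypothetical sketch).

    Maps high-dimensional backbone features (e.g. 2048-d) to a small feature
    space (e.g. 50-d) and classifies with a linear layer whose weights are
    masked so each class relies on only a handful (e.g. 5) of features.
    """

    def __init__(self, in_dim=2048, feat_dim=50, num_classes=200, feats_per_class=5):
        super().__init__()
        self.reduce = nn.Linear(in_dim, feat_dim)  # low-dimensional feature layer
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)
        # Binary mask enforcing sparsity in the decision layer; starts dense.
        self.register_buffer("mask", torch.ones(num_classes, feat_dim))
        self.feats_per_class = feats_per_class

    def prune(self):
        """Keep only the k largest-magnitude weights per class (one possible way to sparsify)."""
        with torch.no_grad():
            topk = self.classifier.weight.abs().topk(self.feats_per_class, dim=1).indices
            self.mask.zero_().scatter_(1, topk, 1.0)

    def forward(self, backbone_features):
        z = torch.relu(self.reduce(backbone_features))  # interpretable low-dim features
        logits = nn.functional.linear(z, self.classifier.weight * self.mask)
        return logits
```

Used in place of a dense 2048-dimensional classification head, such a layer lets every class prediction be traced back to five named features, which is what makes the decision inspectable by a human.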
Cite
Text
Norrenbrock et al. "Take 5: Interpretable Image Classification with a Handful of Features." NeurIPS 2022 Workshops: TEA, 2022.

Markdown

[Norrenbrock et al. "Take 5: Interpretable Image Classification with a Handful of Features." NeurIPS 2022 Workshops: TEA, 2022.](https://mlanthology.org/neuripsw/2022/norrenbrock2022neuripsw-take/)

BibTeX
@inproceedings{norrenbrock2022neuripsw-take,
title = {{Take 5: Interpretable Image Classification with a Handful of Features}},
author = {Norrenbrock, Thomas and Rudolph, Marco and Rosenhahn, Bodo},
booktitle = {NeurIPS 2022 Workshops: TEA},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/norrenbrock2022neuripsw-take/}
}