Towards Automatic Concept-Based Explanations
Abstract
Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions. Most current explanation methods provide explanations through feature importance scores, which identify the features that are important for each individual input. However, systematically summarizing and interpreting such per-sample feature importance scores is itself challenging. In this work, we propose principles and desiderata for *concept*-based explanation, which goes beyond per-sample features to identify higher-level, human-understandable concepts that apply across the entire dataset. We develop a new algorithm, ACE, to automatically extract visual concepts. Our systematic experiments demonstrate that ACE discovers concepts that are human-meaningful, coherent, and important for the neural network's predictions.
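At a high level, ACE extracts concepts by segmenting a class's images at multiple resolutions, mapping each segment into the activation space of an intermediate network layer, and clustering the segments so that each cluster forms a candidate concept (which the paper then scores for importance with TCAV). The sketch below illustrates that pipeline under simplifying assumptions; `get_activations` is a hypothetical stand-in for a trained model's intermediate-layer features, and the resolution and cluster counts are illustrative, not the paper's exact settings.

```python
# Minimal sketch of the ACE pipeline: multi-resolution superpixel
# segmentation, then clustering of segments in activation space.
# Outlier removal and TCAV importance scoring (used in the paper)
# are omitted for brevity.

import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize
from sklearn.cluster import KMeans

def get_activations(patches):
    # Hypothetical feature extractor; replace with activations of an
    # intermediate layer of a trained image classifier.
    return patches.reshape(len(patches), -1)

def extract_segments(images, resolutions=(15, 50, 80), patch_size=(224, 224)):
    """Segment each image at several superpixel resolutions and resize
    every segment to the model's input size."""
    patches = []
    for img in images:
        for n_segments in resolutions:
            seg_map = slic(img, n_segments=n_segments, compactness=20)
            for label in np.unique(seg_map):
                mask = seg_map == label
                # Gray out pixels outside the segment, crop to the
                # segment's bounding box, and resize to the input size.
                patch = np.full_like(img, 0.5, dtype=float)
                patch[mask] = img[mask]
                rows, cols = np.where(mask)
                crop = patch[rows.min():rows.max() + 1,
                             cols.min():cols.max() + 1]
                patches.append(resize(crop, patch_size))
    return np.stack(patches)

def discover_concepts(images, n_concepts=10):
    """Cluster segment activations; each cluster is a candidate concept."""
    patches = extract_segments(images)
    acts = get_activations(patches)
    labels = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(acts)
    return {c: patches[labels == c] for c in range(n_concepts)}

# Example usage with random images standing in for one class's examples:
images = np.random.rand(5, 224, 224, 3)
concepts = discover_concepts(images)
```

Clustering in activation space rather than pixel space is what makes the discovered groups semantically coherent: segments that look different at the pixel level but activate the network similarly (e.g., wheels at different scales) land in the same concept cluster.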
Cite
Text
Ghorbani et al. "Towards Automatic Concept-Based Explanations." Neural Information Processing Systems, 2019.
Markdown
[Ghorbani et al. "Towards Automatic Concept-Based Explanations." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/ghorbani2019neurips-automatic/)
BibTeX
@inproceedings{ghorbani2019neurips-automatic,
  title = {{Towards Automatic Concept-Based Explanations}},
  author = {Ghorbani, Amirata and Wexler, James and Zou, James Y. and Kim, Been},
  booktitle = {Neural Information Processing Systems},
  year = {2019},
  pages = {9277--9286},
  url = {https://mlanthology.org/neurips/2019/ghorbani2019neurips-automatic/}
}