Example or Prototype? Learning Concept-Based Explanations in Time-Series
Abstract
With the continuous increase of deep learning applications in safety-critical systems, the need for an interpretable decision-making process has become a priority within the research community. While many explainable artificial intelligence algorithms exist, a systematic assessment of the suitability of global explanation methods for different applications is not available. In this paper, we address this need by systematically comparing two existing global concept-based explanation methods with our proposed global, model-agnostic concept-based explanation method for time-series data. This method is based on an autoencoder structure and derives abstract global explanations called "prototypes". The results of a human user study and a quantitative analysis show superior performance of the proposed method, but also highlight the necessity of tailoring explanation methods to the target audience of machine learning models.
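The paper's exact architecture is not reproduced here, but the idea of an autoencoder that learns decodable "prototypes" in latent space can be sketched generically. The layer sizes, number of prototypes, distance-based classifier, and loss weights below are illustrative assumptions, not the authors' configuration; the regularizers follow the common prototype-network recipe of pulling samples toward prototypes and prototypes toward samples.

```python
# A minimal, hypothetical sketch of a prototype-learning autoencoder for
# univariate time series. All sizes and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeAutoencoder(nn.Module):
    def __init__(self, seq_len=128, latent_dim=16, n_prototypes=8, n_classes=2):
        super().__init__()
        # 1D-conv encoder compressing a series of length seq_len to a latent code.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (seq_len // 4), latent_dim),
        )
        # Decoder mapping latent codes back to series; decoding the learned
        # prototypes yields the human-readable global explanations.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (seq_len // 4)),
            nn.Unflatten(1, (32, seq_len // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )
        # Learnable prototype vectors living in the latent space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Classifying from prototype distances ties the decision to the explanations.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                      # (B, latent_dim)
        x_hat = self.decoder(z)                  # (B, 1, seq_len)
        dists = torch.cdist(z, self.prototypes)  # (B, n_prototypes)
        logits = self.classifier(-dists)         # closer prototype => higher score
        return x_hat, logits, dists

def prototype_loss(x, y, x_hat, logits, dists, lam_rec=1.0, lam_proto=0.1):
    rec = F.mse_loss(x_hat, x)          # keep the latent space decodable
    ce = F.cross_entropy(logits, y)     # classification objective
    r1 = dists.min(dim=1).values.mean() # every sample near some prototype
    r2 = dists.min(dim=0).values.mean() # every prototype near some sample
    return ce + lam_rec * rec + lam_proto * (r1 + r2)

# Usage on dummy data:
model = PrototypeAutoencoder()
x = torch.randn(4, 1, 128)
y = torch.randint(0, 2, (4,))
x_hat, logits, dists = model(x)
loss = prototype_loss(x, y, x_hat, logits, dists)
loss.backward()
# The decoded prototypes serve as the global explanations:
with torch.no_grad():
    explanations = model.decoder(model.prototypes)  # (n_prototypes, 1, seq_len)
```

Decoding the prototypes (last line) is what makes the explanations global and abstract: each one is a synthetic time series summarizing a region of the latent space rather than a single training example.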
Cite
Text:
Obermair et al. "Example or Prototype? Learning Concept-Based Explanations in Time-Series." Proceedings of The 14th Asian Conference on Machine Learning, 2022.
Markdown:
[Obermair et al. "Example or Prototype? Learning Concept-Based Explanations in Time-Series." Proceedings of The 14th Asian Conference on Machine Learning, 2022.](https://mlanthology.org/acml/2022/obermair2022acml-example/)
BibTeX:
@inproceedings{obermair2022acml-example,
title = {{Example or Prototype? Learning Concept-Based Explanations in Time-Series}},
author = {Obermair, Christoph and Fuchs, Alexander and Pernkopf, Franz and Felsberger, Lukas and Apollonio, Andrea and Wollmann, Daniel},
booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
year = {2022},
pages = {816--831},
volume = {189},
url = {https://mlanthology.org/acml/2022/obermair2022acml-example/}
}