LIMESegment: Meaningful, Realistic Time Series Explanations

Abstract

LIME (Locally Interpretable Model-Agnostic Explanations) has become a popular way of generating explanations for tabular, image and natural language models, providing insight into why an instance was given a particular classification. In this paper we adapt LIME to time series classification, an under-explored area with existing approaches failing to account for the structure of this kind of data. We frame the non-trivial challenge of adapting LIME to time series classification as the following open questions: “What is a meaningful interpretable representation of a time series?”, “How does one realistically perturb a time series?” and “What is a local neighbourhood around a time series?”. We propose solutions to all three questions and combine them into a novel time series explanation framework called LIMESegment, which outperforms existing adaptations of LIME to time series on a variety of classification tasks.
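The abstract's core idea — represent a time series as segments, perturb some of them, and fit a weighted linear surrogate to the classifier's responses — can be sketched generically. This is a simplified illustration only, not the paper's actual method: LIMESegment proposes learned segmentation, realistic perturbations, and a tailored locality measure, whereas the sketch below uses hypothetical stand-ins (equal-width segments, mean replacement, and a crude kept-fraction kernel) with an assumed `predict_fn` black-box classifier.

```python
import numpy as np

def explain_segments(series, predict_fn, n_segments=4, n_samples=200, seed=0):
    """LIME-style importance scores over fixed-width time series segments.

    Simplified sketch: equal-width segments, mean-value replacement, and a
    kept-fraction locality kernel stand in for LIMESegment's learned
    segmentation, realistic perturbation, and locality measure.
    """
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    bounds = np.linspace(0, len(series), n_segments + 1, dtype=int)

    # Binary interpretable representation: 1 = segment kept, 0 = perturbed.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1  # include the unperturbed instance itself

    X, y, w = [], [], []
    for mask in masks:
        perturbed = series.copy()
        for s in range(n_segments):
            if mask[s] == 0:
                a, b = bounds[s], bounds[s + 1]
                perturbed[a:b] = series.mean()  # naive baseline replacement
        X.append(mask)
        y.append(predict_fn(perturbed))  # black-box model output
        w.append(mask.mean())  # crude locality: fraction of segments kept

    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    sw = np.sqrt(np.asarray(w, dtype=float))  # weighted least squares
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef  # one importance weight per segment
```

For example, `explain_segments(signal, lambda x: clf.predict_proba(x[None])[0, 1])` would score each segment's contribution to a binary classifier's positive-class probability.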

Cite

Text

Sivill and Flach. "LIMESegment: Meaningful, Realistic Time Series Explanations." Artificial Intelligence and Statistics, 2022.

Markdown

[Sivill and Flach. "LIMESegment: Meaningful, Realistic Time Series Explanations." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/sivill2022aistats-limesegment/)

BibTeX

@inproceedings{sivill2022aistats-limesegment,
  title     = {{LIMESegment: Meaningful, Realistic Time Series Explanations}},
  author    = {Sivill, Torty and Flach, Peter},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {3418--3433},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/sivill2022aistats-limesegment/}
}