Self-Interpretable Time Series Prediction with Counterfactual Explanations
Abstract
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving. Most existing methods focus on interpreting predictions by assigning importance scores to segments of a time series. In this paper, we take a different and more challenging route and aim at developing a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions. Specifically, we formalize the problem of time series counterfactual explanations, establish associated evaluation protocols, and propose a variational Bayesian deep learning model equipped with counterfactual inference capability via time series abduction, action, and prediction. Compared with state-of-the-art baselines, our self-interpretable model can generate better counterfactual explanations while maintaining comparable prediction accuracy.
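The abduction, action, and prediction steps mentioned in the abstract follow the standard three-step counterfactual recipe from causal inference. A minimal sketch on a toy linear model illustrates the idea (all variable names and values here are hypothetical, not taken from the paper):

```python
# Toy structural model: y = w * x + u, with exogenous noise u.
# Counterfactual inference proceeds in three steps:
#   1. Abduction:  infer the noise u consistent with the observation.
#   2. Action:     intervene on the input x.
#   3. Prediction: recompute y under the intervention, holding u fixed.

def abduction(x_obs, y_obs, w):
    """Infer the exogenous noise u from the observed (x, y) pair."""
    return y_obs - w * x_obs

def action(x_obs, delta):
    """Intervene: shift the observed input to a counterfactual value."""
    return x_obs + delta

def prediction(x_cf, u, w):
    """Predict the outcome under the intervention, keeping u fixed."""
    return w * x_cf + u

w = 2.0
x_obs, y_obs = 1.5, 3.7            # observed pair: 3.7 = 2.0 * 1.5 + 0.7
u = abduction(x_obs, y_obs, w)     # abduced noise u = 0.7
x_cf = action(x_obs, delta=1.0)    # counterfactual input x = 2.5
y_cf = prediction(x_cf, u, w)      # counterfactual outcome: 2.0 * 2.5 + 0.7
print(round(y_cf, 1))              # prints 5.7
```

In the paper's setting the structural model is a deep variational Bayesian network over time series rather than this one-line linear map, but the same abduct-then-intervene-then-predict logic applies.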
Cite
Text
Yan and Wang. "Self-Interpretable Time Series Prediction with Counterfactual Explanations." International Conference on Machine Learning, 2023.
Markdown
[Yan and Wang. "Self-Interpretable Time Series Prediction with Counterfactual Explanations." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/yan2023icml-selfinterpretable/)
BibTeX
@inproceedings{yan2023icml-selfinterpretable,
title = {{Self-Interpretable Time Series Prediction with Counterfactual Explanations}},
author = {Yan, Jingquan and Wang, Hao},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {39110--39125},
volume = {202},
url = {https://mlanthology.org/icml/2023/yan2023icml-selfinterpretable/}
}