Reproducibility Study of "Learning Perturbations to Explain Time Series Predictions"

Abstract

In this work, we attempt to reproduce the results of Enguehard (2023), which introduced ExtremalMask, a mask-based perturbation method for explaining time series predictions. We investigate the paper's key claims, namely that (1) the model outperforms other models on several key metrics on both synthetic and real data, and (2) the model performs better when using the loss function of the preservation game rather than that of the deletion game. Although some discrepancies exist, our results generally support the core of the original paper's conclusions. Next, we interpret ExtremalMask's outputs using new visualizations and metrics and discuss the insights each interpretation provides. Finally, we test whether ExtremalMask creates out-of-distribution samples and find that the model does not exhibit this flaw on our tested synthetic dataset. Overall, our results support and add nuance to the original paper's findings.

Cite

Text

Fan et al. "Reproducibility Study of "Learning Perturbations to Explain Time Series Predictions"." Transactions on Machine Learning Research, 2024.

Markdown

[Fan et al. "Reproducibility Study of "Learning Perturbations to Explain Time Series Predictions"." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/fan2024tmlr-reproducibility/)

BibTeX

@article{fan2024tmlr-reproducibility,
  title     = {{Reproducibility Study of "Learning Perturbations to Explain Time Series Predictions"}},
  author    = {Fan, Jiapeng and Cadigan, Luke and Skaisgiris, Paulius and Arias, Sebastian Uriel},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/fan2024tmlr-reproducibility/}
}