Explanation Shift: Detecting Distribution Shifts on Tabular Data via the Explanation Space
Abstract
As input data distributions evolve, the predictive performance of machine learning models tends to deteriorate. In the past, predictive performance was considered the key indicator to monitor. However, explanation aspects have gained attention in recent years. In this work, we investigate how model predictive performance and model explanation characteristics are affected under distribution shifts, and how these key indicators relate to each other for tabular data. We find that modeling explanation shifts can be a better indicator for detecting changes in predictive performance than state-of-the-art techniques based on representations of distribution shifts. We provide a mathematical analysis of different types of distribution shifts as well as synthetic experimental examples.
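As a rough illustration of the idea described above, the sketch below (not the authors' reference implementation) trains a model on source data, computes SHAP values for source and shifted target samples, and uses a simple discriminator as a classifier two-sample test on the explanation space. The synthetic features, the shifted variable, and the logistic-regression detector are hypothetical choices for illustration; an AUC well above 0.5 suggests a shift in the explanation space.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic tabular source data: y depends on the first two features.
X_source = rng.normal(size=(2000, 3))
y_source = X_source[:, 0] + 2 * X_source[:, 1] + rng.normal(scale=0.1, size=2000)

# Target data with a covariate shift on the second feature (illustrative shift).
X_target = rng.normal(size=(2000, 3))
X_target[:, 1] += 1.5

model = GradientBoostingRegressor().fit(X_source, y_source)

# Explanation space: SHAP values of the trained model on both samples.
explainer = shap.TreeExplainer(model)
S_source = explainer.shap_values(X_source)
S_target = explainer.shap_values(X_target)

# Classifier two-sample test on explanations: label source vs. target
# and measure how well a detector separates them (ROC AUC).
S = np.vstack([S_source, S_target])
z = np.concatenate([np.zeros(len(S_source)), np.ones(len(S_target))])
auc = cross_val_score(LogisticRegression(max_iter=1000), S, z,
                      scoring="roc_auc", cv=5).mean()
print(f"Explanation-shift detector AUC: {auc:.3f}")  # ~0.5 means no detectable shift

The same detector run directly on the raw inputs instead of the SHAP values gives the input-distribution baseline that the paper compares against.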
Cite
Text
Mougan et al. "Explanation Shift: Detecting Distribution Shifts on Tabular Data via the Explanation Space." NeurIPS 2022 Workshops: DistShift, 2022.
Markdown
[Mougan et al. "Explanation Shift: Detecting Distribution Shifts on Tabular Data via the Explanation Space." NeurIPS 2022 Workshops: DistShift, 2022.](https://mlanthology.org/neuripsw/2022/mougan2022neuripsw-explanation/)
BibTeX
@inproceedings{mougan2022neuripsw-explanation,
  title = {{Explanation Shift: Detecting Distribution Shifts on Tabular Data via the Explanation Space}},
  author = {Mougan, Carlos and Broelemann, Klaus and Kasneci, Gjergji and Tiropanis, Thanassis and Staab, Steffen},
  booktitle = {NeurIPS 2022 Workshops: DistShift},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/mougan2022neuripsw-explanation/}
}