Novel Topological Shapes of Model Interpretability

Abstract

The most accurate models can be the most challenging to interpret. This paper advances interpretability analysis by combining insights from $\texttt{Mapper}$ with recent interpretable machine-learning research. Enforcing new visualization constraints on $\texttt{Mapper}$, we produce a globally- to locally-interpretable visualization of the Explainable Boosting Machine. We demonstrate the usefulness of our approach on three data sets: cervical cancer risk, propaganda Tweets, and a loan-default data set artificially hardened with severe concept drift.
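For readers unfamiliar with the $\texttt{Mapper}$ construction the abstract builds on, the following is a minimal pure-Python sketch, not the paper's implementation: cover the range of a lens (filter) function with overlapping intervals, cluster each preimage (here via single-linkage connected components at a fixed distance threshold, a simplifying assumption), and connect clusters that share points. All function names and parameters below are illustrative.

```python
from itertools import combinations


def dist(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5


def single_linkage(points, idx, eps):
    """Connected components of the indices `idx` under pairwise distance <= eps
    (a stand-in for the clustering step of Mapper; real pipelines typically
    use DBSCAN or hierarchical clustering)."""
    parent = {i: i for i in idx}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(idx, 2):
        if dist(points[i], points[j]) <= eps:
            parent[find(i)] = find(j)
    comps = {}
    for i in idx:
        comps.setdefault(find(i), set()).add(i)
    return [frozenset(c) for c in comps.values()]


def mapper_graph(points, lens, n_intervals=4, overlap=0.5, eps=1.5):
    """Sketch of 1-D Mapper: overlapping interval cover of the lens range,
    clustering per preimage, edges between clusters sharing points."""
    lo, hi = min(lens), max(lens)
    length = (hi - lo) / n_intervals
    step = length * (1 - overlap)
    # Build an overlapping cover of the lens range.
    intervals, start = [], lo
    while start < hi:
        intervals.append((start, start + length))
        start += step
    # Cluster each preimage; every cluster becomes a graph node.
    nodes = []
    for a, b in intervals:
        idx = [i for i, v in enumerate(lens) if a <= v <= b]
        nodes.extend(single_linkage(points, idx, eps))
    # Connect any two nodes that share at least one data point.
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]
    return nodes, edges
```

Run on points sampled from a circle with the x-coordinate as the lens, this recovers a cycle-shaped graph, the classic Mapper sanity check. In the paper's setting the lens would instead come from model outputs or explanations rather than raw coordinates.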

Cite

Text

van Veen. "Novel Topological Shapes of Model Interpretability." NeurIPS 2020 Workshops: TDA_and_Beyond, 2020.

Markdown

[van Veen. "Novel Topological Shapes of Model Interpretability." NeurIPS 2020 Workshops: TDA_and_Beyond, 2020.](https://mlanthology.org/neuripsw/2020/vanveen2020neuripsw-novel/)

BibTeX

@inproceedings{vanveen2020neuripsw-novel,
  title     = {{Novel Topological Shapes of Model Interpretability}},
  author    = {van Veen, Hendrik Jacob},
  booktitle = {NeurIPS 2020 Workshops: TDA_and_Beyond},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/vanveen2020neuripsw-novel/}
}