Explaining High-Dimensional Text Classifiers
Abstract
Explainability has become a valuable tool in recent years, helping humans better understand AI-guided decisions. However, classic explainability tools are often quite limited when applied to high-dimensional inputs and neural network classifiers. We present a new explainability method that exploits theoretically proven high-dimensional properties of neural network classifiers. We demonstrate it in two settings: 1) the classical sentiment analysis task on the IMDB reviews dataset, and 2) a malware-detection task on our PowerShell scripts dataset.
Cite
Text
Melamed and Caruana. "Explaining High-Dimensional Text Classifiers." NeurIPS 2023 Workshops: XAIA, 2023.
Markdown
[Melamed and Caruana. "Explaining High-Dimensional Text Classifiers." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/melamed2023neuripsw-explaining/)
BibTeX
@inproceedings{melamed2023neuripsw-explaining,
title = {{Explaining High-Dimensional Text Classifiers}},
author = {Melamed, Odelia and Caruana, Rich},
booktitle = {NeurIPS 2023 Workshops: XAIA},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/melamed2023neuripsw-explaining/}
}