Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography
Abstract
Explainability of machine learning methods is of fundamental importance in healthcare to calibrate trust. A large branch of explainable machine learning uses tools linked to the Shapley value, which have nonetheless been found difficult to interpret and potentially misleading. Taking multiclass classification as a reference task, we argue that a critical issue in these methods is that they disregard the structure of the model outputs. We develop the Categorical Shapley value as a theoretically-grounded method to explain the output of multiclass classifiers in terms of transition (or flipping) probabilities across classes. We demonstrate our method on a case study composed of three example scenarios for pneumonia detection and subtyping using X-ray images.
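To make the idea of class-transition explanations concrete, below is a minimal illustrative sketch (not the paper's Categorical Shapley implementation): it computes exact Shapley values for a toy 4-feature, 3-class linear classifier, where the payoff of a feature coalition is an indicator of whether keeping those features produces a given target class. The model, baseline, and all names are assumptions chosen for illustration.

# Hypothetical sketch: Shapley attributions whose payoff is a class
# indicator, illustrating attribution over categorical (class) outputs
# rather than a single scalar probability. All names are illustrative.
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # toy linear 3-class classifier

def predict(x):
    # Predicted class index for input x.
    return int(np.argmax(x @ W))

def value(x, baseline, subset, target_class):
    # Payoff of a feature coalition: keep the features in `subset`
    # from x, replace the rest with the baseline, and check whether
    # the model predicts `target_class`.
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return float(predict(z) == target_class)

def shapley(x, baseline, target_class):
    # Exact Shapley values over all coalitions (exponential in the
    # number of features; fine for this 4-feature toy example).
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for s in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                gain = (value(x, baseline, s + (i,), target_class)
                        - value(x, baseline, s, target_class))
                phi[i] += w * gain
    return phi

x = rng.normal(size=4)
baseline = np.zeros(4)
c = predict(x)
print(f"predicted class: {c}")
print("attributions toward the predicted class:", shapley(x, baseline, c))

Averaging such indicator-valued payoffs over many baselines would estimate per-feature contributions to flipping probabilities between classes, which is the output structure the abstract argues standard scalar-valued Shapley explanations disregard.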
Cite
Text
Franceschi et al. "Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography." ICLR 2023 Workshops: TML4H, 2023.
Markdown
[Franceschi et al. "Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography." ICLR 2023 Workshops: TML4H, 2023.](https://mlanthology.org/iclrw/2023/franceschi2023iclrw-explaining/)
BibTeX
@inproceedings{franceschi2023iclrw-explaining,
title = {{Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography}},
author = {Franceschi, Luca and Zor, Cemre and Zafar, Muhammad Bilal and Detommaso, Gianluca and Archambeau, Cedric and Madl, Tamas and Donini, Michele and Seeger, Matthias},
booktitle = {ICLR 2023 Workshops: TML4H},
year = {2023},
url = {https://mlanthology.org/iclrw/2023/franceschi2023iclrw-explaining/}
}