Explaining Chest X-Ray Pathology Models Using Textual Concepts

Abstract

Deep learning models have revolutionized medical imaging and diagnostics, yet their opaque nature poses challenges for clinical adoption and trust. Among approaches to improving model interpretability, concept-based explanations aim to provide concise, human-understandable explanations of an arbitrary classifier. However, such methods usually require large amounts of manually collected data with concept annotations, which are often scarce in the medical domain. In this paper, we propose Conceptual Counterfactual Explanations for Chest X-ray (CoCoX), which leverages the joint embedding space of an existing vision-language model (VLM) to explain black-box classifier outcomes without the need for annotated datasets. Specifically, we utilize textual concepts derived from chest radiography reports and a pre-trained chest radiography-based VLM to explain three common cardiothoracic pathologies. We demonstrate that the explanations generated by our method are semantically meaningful and faithful to the underlying pathologies.
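
The abstract only outlines the idea, so the following is a minimal sketch of how a conceptual-counterfactual explanation in a VLM joint embedding space could look. It is not the authors' implementation: the encoders (vlm_text_encoder), the black-box classifier head, the optimizer settings, and the L1 weight are all assumptions made for illustration. The intuition is that textual concepts are embedded as directions in the shared space, and a sparse combination of those directions is optimized so that the perturbed image embedding changes the classifier's decision; concepts with large weights then serve as the explanation.

import torch
import torch.nn.functional as F

def concept_bank(vlm_text_encoder, concepts):
    """Embed textual concepts (e.g., phrases mined from radiology reports)
    with a hypothetical VLM text encoder; each row is one concept direction."""
    with torch.no_grad():
        C = vlm_text_encoder(concepts)            # (num_concepts, d)
    return F.normalize(C, dim=-1)

def conceptual_counterfactual(z_img, C, classifier, target_class,
                              steps=300, lr=0.1, l1=0.05):
    """Find sparse concept weights w such that shifting the image embedding
    along the concept directions moves the classifier toward target_class."""
    w = torch.zeros(C.shape[0], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z_cf = z_img + w @ C                      # counterfactual embedding
        logits = classifier(z_cf.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss = loss + l1 * w.abs().sum()          # sparsity keeps the explanation concise
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()                             # large |w_k| => concept k drives the change

Under these assumptions, concepts with large positive weights indicate findings that would need to be "added" to the image embedding for the classifier to predict the target pathology, and large negative weights indicate findings that would need to be removed.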

Cite

Text

Sadashivaiah et al. "Explaining Chest X-Ray Pathology Models Using Textual Concepts." NeurIPS 2024 Workshops: AIM-FM, 2024.

Markdown

[Sadashivaiah et al. "Explaining Chest X-Ray Pathology Models Using Textual Concepts." NeurIPS 2024 Workshops: AIM-FM, 2024.](https://mlanthology.org/neuripsw/2024/sadashivaiah2024neuripsw-explaining/)

BibTeX

@inproceedings{sadashivaiah2024neuripsw-explaining,
  title     = {{Explaining Chest X-Ray Pathology Models Using Textual Concepts}},
  author    = {Sadashivaiah, Vijay and Yan, Pingkun and Hendler, James},
  booktitle = {NeurIPS 2024 Workshops: AIM-FM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/sadashivaiah2024neuripsw-explaining/}
}