ChEX: Interactive Localization and Region Description in Chest X-Rays

Abstract

Report generation models offer fine-grained textual interpretations of medical images like chest X-rays, yet they often lack interactivity (the ability to steer the generation process through user queries) and localized interpretability (visually grounding their predictions), which we deem essential for future adoption in clinical practice. Existing efforts to tackle these issues either limit interactivity by not supporting textual queries or do not also offer localized interpretability. Therefore, we propose a novel multitask architecture and training paradigm that integrates textual prompts and bounding boxes for diverse aspects such as anatomical regions and pathologies. We call this approach the Chest X-Ray Explainer (ChEX). Evaluations across a heterogeneous set of 9 chest X-ray tasks, including localized image interpretation and report generation, show that ChEX is competitive with SOTA models, while further analysis demonstrates its interactive capabilities. Code: https://github.com/philip-mueller/chex.
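
To make the interaction model concrete, below is a minimal, self-contained Python sketch of the interface the abstract describes: textual prompts in, bounding boxes plus region descriptions out. All names here (Region, chex_query) are hypothetical illustrations for this page, not the actual API of the linked philip-mueller/chex repository.

# Hypothetical sketch of ChEX's prompt-in, box-plus-text-out interface.
# Region and chex_query are illustrative assumptions, not the repo's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates
    description: str                         # generated free-text finding for this region

def chex_query(image, prompts: List[str]) -> List[Region]:
    """Given a chest X-ray and textual prompts (anatomical regions or
    pathologies), return one grounded result per prompt: a bounding box
    plus a region description. Stubbed here for illustration only."""
    return [Region(box=(0.0, 0.0, 0.0, 0.0), description=f"finding for '{p}'")
            for p in prompts]

# Interactive use: the user steers generation by choosing which prompts to ask.
regions = chex_query(image=None, prompts=["left lung", "cardiomegaly"])
for r in regions:
    print(r.box, r.description)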

Cite

Text

Müller et al. "ChEX: Interactive Localization and Region Description in Chest X-Rays." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72664-4_6

Markdown

[Müller et al. "ChEX: Interactive Localization and Region Description in Chest X-Rays." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/muller2024eccv-chex/) doi:10.1007/978-3-031-72664-4_6

BibTeX

@inproceedings{muller2024eccv-chex,
  title     = {{ChEX: Interactive Localization and Region Description in Chest X-Rays}},
  author    = {Müller, Philip and Kaissis, Georgios and Rueckert, Daniel},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72664-4_6},
  url       = {https://mlanthology.org/eccv/2024/muller2024eccv-chex/}
}