LIMITR: Leveraging Local Information for Medical Image-Text Representation
Abstract
Medical imaging analysis plays a critical role in the diagnosis and treatment of various medical conditions. This paper focuses on chest X-ray images and their corresponding radiological reports. It presents a new model that learns a joint X-ray image & report representation. The model is based on a novel alignment scheme between the visual data and the text, which takes into account both local and global information. Furthermore, the model integrates domain-specific information of two types -- lateral images and the consistent visual structure of chest images. Our representation is shown to benefit three types of retrieval tasks: text-image retrieval, class-based retrieval, and phrase-grounding.
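To make the local-and-global alignment idea concrete, here is a minimal sketch of how such a pair score could be computed. This is not the authors' implementation: the PyTorch setup, the cosine similarities, the attention-style word-to-region aggregation, and the equal weighting of the local and global terms are all illustrative assumptions.

# Minimal sketch (illustrative, not the paper's code): score an X-ray / report pair
# by combining a global similarity with an aggregation of local similarities
# between image regions and report words.
import torch
import torch.nn.functional as F

def local_global_similarity(region_feats, word_feats, global_img, global_txt, tau=0.1):
    """Score one image-report pair.

    region_feats: (R, d) local visual features (e.g., patch/region embeddings)
    word_feats:   (W, d) local text features (e.g., word embeddings)
    global_img:   (d,)   global image embedding
    global_txt:   (d,)   global report embedding
    """
    region_feats = F.normalize(region_feats, dim=-1)
    word_feats = F.normalize(word_feats, dim=-1)

    # Local alignment: each word attends to image regions; its score is the
    # similarity to its attention-weighted visual context.
    sim = word_feats @ region_feats.T                 # (W, R) word-region similarities
    attn = torch.softmax(sim / tau, dim=-1)           # soft word-to-region alignment
    context = attn @ region_feats                     # (W, d) per-word visual context
    local_score = (F.normalize(context, dim=-1) * word_feats).sum(-1).mean()

    # Global alignment: cosine similarity of whole-image and whole-report embeddings.
    global_score = F.cosine_similarity(global_img, global_txt, dim=0)

    # Combine local and global evidence (equal weighting here is purely illustrative).
    return 0.5 * local_score + 0.5 * global_score

if __name__ == "__main__":
    d = 128
    score = local_global_similarity(
        torch.randn(49, d), torch.randn(32, d), torch.randn(d), torch.randn(d)
    )
    print(score.item())

In a contrastive training setup, such a pair score would typically be computed for all image-report pairs in a batch and optimized so that matching pairs score higher than mismatched ones; the paper's specific losses and its use of lateral images and chest-structure priors are not reproduced here.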
Cite
Text
Dawidowicz et al. "LIMITR: Leveraging Local Information for Medical Image-Text Representation." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01935
Markdown
[Dawidowicz et al. "LIMITR: Leveraging Local Information for Medical Image-Text Representation." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/dawidowicz2023iccv-limitr/) doi:10.1109/ICCV51070.2023.01935
BibTeX
@inproceedings{dawidowicz2023iccv-limitr,
title = {{LIMITR: Leveraging Local Information for Medical Image-Text Representation}},
author = {Dawidowicz, Gefen and Hirsch, Elad and Tal, Ayellet},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {21165-21173},
doi = {10.1109/ICCV51070.2023.01935},
url = {https://mlanthology.org/iccv/2023/dawidowicz2023iccv-limitr/}
}