Joint Learning of Localized Representations from Medical Images and Reports

Abstract

Contrastive learning has proven effective for pre-training image models on unlabeled data with promising results for tasks such as medical image classification. Using paired text (like radiological reports) during pre-training improves the results even further. Still, most existing methods target image classification downstream tasks and may not be optimal for localized tasks like semantic segmentation or object detection. We therefore propose Localized representation learning from Vision and Text (LoVT), a text-supervised pre-training method that explicitly targets localized medical imaging tasks. Our method combines instance-level image-report contrastive learning with local contrastive learning on image region and report sentence representations. We evaluate LoVT and commonly used pre-training methods on an evaluation framework of 18 localized tasks on chest X-rays from five public datasets. LoVT performs best on 10 of the 18 studied tasks, making it the method of choice for localized tasks.
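
The abstract describes a two-level objective: an instance-level contrastive term over image-report pairs plus a local contrastive term over region and sentence representations. Below is a minimal PyTorch sketch of such a combined loss, intended only to illustrate the structure; the function names, the assumption that each pair comes with equal numbers of pre-aligned region and sentence representations, and the weight lambda_local are illustrative assumptions, not the authors' implementation (LoVT itself aligns regions and sentences via cross-modal attention, which is abstracted away here).

import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.1):
    # Symmetric InfoNCE over paired rows: row i of `a` is the positive
    # match for row i of `b`; all other rows serve as negatives.
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def two_level_contrastive_loss(img_global, txt_global,
                               region_feats, sent_feats,
                               lambda_local=1.0):
    # Instance-level term: each image representation (N, d) should match
    # the representation of its own report within the batch.
    global_loss = info_nce(img_global, txt_global)
    # Local term: within each image-report pair, contrast the aligned
    # region/sentence representations (N, R, d) against the other
    # regions/sentences of the same pair.
    local_loss = torch.stack([
        info_nce(r, s) for r, s in zip(region_feats, sent_feats)
    ]).mean()
    return global_loss + lambda_local * local_loss

# Example with hypothetical shapes: batch of 8 pairs, 32 aligned
# region/sentence slots per pair, 128-dimensional embeddings.
img_g = torch.randn(8, 128)
txt_g = torch.randn(8, 128)
regions = torch.randn(8, 32, 128)
sentences = torch.randn(8, 32, 128)
loss = two_level_contrastive_loss(img_g, txt_g, regions, sentences)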

Cite

Text

Müller et al. "Joint Learning of Localized Representations from Medical Images and Reports." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19809-0_39

Markdown

[Müller et al. "Joint Learning of Localized Representations from Medical Images and Reports." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/muller2022eccv-joint/) doi:10.1007/978-3-031-19809-0_39

BibTeX

@inproceedings{muller2022eccv-joint,
  title     = {{Joint Learning of Localized Representations from Medical Images and Reports}},
  author    = {Müller, Philip and Kaissis, Georgios and Zou, Congyu and Rueckert, Daniel},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19809-0_39},
  url       = {https://mlanthology.org/eccv/2022/muller2022eccv-joint/}
}