Multimodal Deep Transfer Learning for the Analysis of Optical Coherence Tomography Scans and Retinal Fundus Photographs
Abstract
Deep learning methods are increasingly applied to ophthalmologic scans in order to diagnose eye diseases and to prognosticate cardiovascular or renal outcomes. In this work, we create a multimodal deep learning model that combines retinal fundus photographs and optical coherence tomography scans and evaluate it on predictive tasks, matching state-of-the-art performance with a smaller dataset. We use saliency maps to show which regions of the eye morphology influence the model's prediction and benchmark the performance of the multimodal model against algorithms that use only the individual modalities.
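The abstract describes a multimodal model built from the two imaging modalities via transfer learning, but no implementation details are given here. As a purely illustrative sketch, not the authors' architecture, the PyTorch snippet below shows one common late-fusion setup: an ImageNet-pretrained encoder per modality, with the concatenated embeddings fed to a small prediction head. All names (`MultimodalFusionNet`, the ResNet-18 backbones, the 256-unit head) are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 for the weights API


class MultimodalFusionNet(nn.Module):
    """Illustrative late-fusion model: one pretrained encoder per imaging
    modality, concatenated embeddings passed to a shared prediction head."""

    def __init__(self, num_outputs: int = 1):
        super().__init__()
        # Fundus-photograph branch: ImageNet-pretrained backbone (transfer learning).
        self.fundus_encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.fundus_encoder.fc = nn.Identity()  # keep the 512-d feature vector
        # OCT branch: a second pretrained backbone of the same type.
        self.oct_encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.oct_encoder.fc = nn.Identity()
        # Fusion head over the concatenated 1024-d embedding.
        self.head = nn.Sequential(
            nn.Linear(1024, 256),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, num_outputs),
        )

    def forward(self, fundus: torch.Tensor, oct_scan: torch.Tensor) -> torch.Tensor:
        z_fundus = self.fundus_encoder(fundus)    # (B, 512)
        z_oct = self.oct_encoder(oct_scan)        # (B, 512)
        return self.head(torch.cat([z_fundus, z_oct], dim=1))


if __name__ == "__main__":
    model = MultimodalFusionNet(num_outputs=1)
    fundus = torch.randn(2, 3, 224, 224)    # dummy fundus photographs
    oct_scan = torch.randn(2, 3, 224, 224)  # dummy OCT scans, replicated to 3 channels
    print(model(fundus, oct_scan).shape)    # torch.Size([2, 1])
```

Saliency maps, as mentioned in the abstract, can then be obtained for such a model with standard gradient-based attribution (e.g., taking the gradient of the output with respect to each input image); the specific method used by the authors is described in the paper itself.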
Cite
Text
Tsangalidou et al. "Multimodal Deep Transfer Learning for the Analysis of Optical Coherence Tomography Scans and Retinal Fundus Photographs." NeurIPS 2022 Workshops: LMRL, 2022.
Markdown
[Tsangalidou et al. "Multimodal Deep Transfer Learning for the Analysis of Optical Coherence Tomography Scans and Retinal Fundus Photographs." NeurIPS 2022 Workshops: LMRL, 2022.](https://mlanthology.org/neuripsw/2022/tsangalidou2022neuripsw-multimodal/)
BibTeX
@inproceedings{tsangalidou2022neuripsw-multimodal,
title = {{Multimodal Deep Transfer Learning for the Analysis of Optical Coherence Tomography Scans and Retinal Fundus Photographs}},
author = {Tsangalidou, Zoi and Fong, Edwin and Sundgaard, Josefine Vilsbøll and Abrahamsen, Trine Julie and Kvist, Kajsa},
booktitle = {NeurIPS 2022 Workshops: LMRL},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/tsangalidou2022neuripsw-multimodal/}
}