Eye-Gaze Guided Multi-Modal Alignment for Medical Representation Learning

Abstract

In medical multi-modal frameworks, aligning cross-modality features presents a significant challenge. Existing works learn features that are implicitly aligned from the data, without considering the explicit relationships in the medical context, and this reliance on data alone can limit the generalization of the learned alignment. In this work, we propose the Eye-gaze Guided Multi-modal Alignment (EGMA) framework, which harnesses eye-gaze data to better align medical visual and textual features. We explore the natural auxiliary role of radiologists' eye-gaze in aligning medical images and text, and introduce a novel approach that uses eye-gaze data collected synchronously from radiologists during diagnostic evaluations. On downstream image classification and image-text retrieval tasks across four medical datasets, EGMA achieves state-of-the-art performance and stronger generalization across datasets. We also examine how varying amounts of eye-gaze data affect model performance, highlighting the feasibility and utility of integrating this auxiliary data into multi-modal alignment frameworks.
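The abstract does not spell out the EGMA objective, so the following is only a minimal sketch of the general idea it describes: using gaze-derived similarity as an auxiliary signal on top of a standard CLIP-style image-text contrastive loss. The function name, the gaze_sim input (a hypothetical batch-level similarity matrix derived from radiologists' gaze heatmaps), and the blending weight alpha are all illustrative assumptions, not the paper's method.

# Sketch only: a CLIP-style contrastive loss whose one-hot targets are
# softened by a hypothetical gaze-derived similarity matrix. This is an
# assumed illustration of gaze-guided alignment, not the EGMA objective.
import torch
import torch.nn.functional as F

def gaze_guided_contrastive_loss(img_emb, txt_emb, gaze_sim,
                                 temperature=0.07, alpha=0.5):
    """img_emb, txt_emb: (B, D) embeddings of paired images and reports.
    gaze_sim: (B, B) assumed similarity between samples, derived from
    radiologists' gaze (e.g., overlap of gaze heatmaps); alpha blends it
    with the standard identity targets."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarities
    hard = torch.eye(logits.size(0), device=logits.device) # one-hot CLIP targets
    # Soft targets from gaze, normalized per direction (rows must sum to 1
    # for cross_entropy with probability targets, PyTorch >= 1.10).
    soft_i2t = F.softmax(gaze_sim / temperature, dim=-1)
    soft_t2i = F.softmax(gaze_sim.t() / temperature, dim=-1)
    targets_i2t = (1 - alpha) * hard + alpha * soft_i2t
    targets_t2i = (1 - alpha) * hard + alpha * soft_t2i
    loss_i2t = F.cross_entropy(logits, targets_i2t)        # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets_t2i)    # text -> image
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random tensors:
# B, D = 8, 512
# loss = gaze_guided_contrastive_loss(torch.randn(B, D), torch.randn(B, D),
#                                     torch.rand(B, B))

Softening the contrastive targets, rather than adding a separate gaze loss, is one plausible way an auxiliary signal like gaze could guide alignment without overriding the paired image-text supervision; the paper itself should be consulted for the actual formulation.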

Cite

Text

Ma et al. "Eye-Gaze Guided Multi-Modal Alignment for Medical Representation Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-0198

Markdown

[Ma et al. "Eye-Gaze Guided Multi-Modal Alignment for Medical Representation Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ma2024neurips-eyegaze/) doi:10.52202/079017-0198

BibTeX

@inproceedings{ma2024neurips-eyegaze,
  title     = {{Eye-Gaze Guided Multi-Modal Alignment for Medical Representation Learning}},
  author    = {Ma, Chong and Jiang, Hanqi and Chen, Wenting and Li, Yiwei and Wu, Zihao and Yu, Xiaowei and Liu, Zhengliang and Guo, Lei and Zhu, Dajiang and Zhang, Tuo and Shen, Dinggang and Liu, Tianming and Li, Xiang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0198},
  url       = {https://mlanthology.org/neurips/2024/ma2024neurips-eyegaze/}
}