Self-Supervised Learning to Predict Ejection Fraction Using Motion-Mode Images

Abstract

Data scarcity is a fundamental problem in machine learning, since data lies at the heart of any ML project. For most applications, annotation is expensive on top of data collection itself, so the ability to learn sample-efficiently from limited labeled data is critical, particularly in healthcare. Self-supervised learning (SSL) learns meaningful representations by exploiting structure in unlabeled data, allowing a model to achieve high accuracy on downstream tasks even with few annotations. In this work, we extend contrastive learning, an efficient approach to SSL, to cardiac imaging. We propose generating M(otion)-mode images from readily available B(rightness)-mode echocardiograms and design structure- and patient-aware contrastive objectives. Experiments on EchoNet-Dynamic show that our model achieves an AUROC of 0.85 by training only a linear head on top of the learned representations, and that performance is robust to reductions in the amount of labeled data.
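As a rough illustration of the M-mode construction described above (a minimal sketch, not the authors' exact pipeline), the code below samples fixed vertical scan lines from a B-mode clip and stacks them over time. The (T, H, W) frame layout, the vertical-line placement, and the names bmode_to_mmode and mmode_views are assumptions made here for illustration.

import numpy as np

def bmode_to_mmode(video: np.ndarray, col: int = None) -> np.ndarray:
    """Generate one M-mode image from a B-mode echo clip.

    video: grayscale clip of shape (T, H, W), one frame per time step
           (assumed layout, chosen for this sketch).
    col:   index of the vertical scan line to sample; defaults to the
           center column (hypothetical choice).
    Returns an (H, T) image: depth along rows, time along columns.
    """
    t, h, w = video.shape
    if col is None:
        col = w // 2
    # Sample the same vertical line in every frame, then stack over time.
    return video[:, :, col].T  # (T, H) -> (H, T)

def mmode_views(video: np.ndarray, n_lines: int = 4) -> list:
    """Generate several M-mode views from one clip by sampling
    evenly spaced scan lines (edges skipped)."""
    _, _, w = video.shape
    cols = np.linspace(0, w - 1, n_lines + 2, dtype=int)[1:-1]
    return [bmode_to_mmode(video, c) for c in cols]

# Example: EchoNet-Dynamic frames are 112x112 pixels.
video = np.random.rand(32, 112, 112)   # stand-in for a real clip
views = mmode_views(video, n_lines=4)  # four (112, 32) M-mode images

In a contrastive setup, M-mode views drawn from the same recording (or the same patient) could serve as positive pairs while views from other patients serve as negatives; the structure- and patient-aware objectives in the paper are more involved than this extraction step.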

Cite

Text

Hu et al. "Self-Supervised Learning to Predict Ejection Fraction Using Motion-Mode Images." ICLR 2023 Workshops: MLGH, 2023.

Markdown

[Hu et al. "Self-Supervised Learning to Predict Ejection Fraction Using Motion-Mode Images." ICLR 2023 Workshops: MLGH, 2023.](https://mlanthology.org/iclrw/2023/hu2023iclrw-selfsupervised/)

BibTeX

@inproceedings{hu2023iclrw-selfsupervised,
  title     = {{Self-Supervised Learning to Predict Ejection Fraction Using Motion-Mode Images}},
  author    = {Hu, Yurong and Sutter, Thomas M. and Ozkan, Ece and Vogt, Julia E.},
  booktitle = {ICLR 2023 Workshops: MLGH},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/hu2023iclrw-selfsupervised/}
}