Contrastive Pre-Training for Multimodal Medical Time Series
Abstract
Clinical time series data are rich and provide significant information about a patient's physiological state. However, these time series can be complex to model, particularly when they consist of multimodal data measured at different resolutions. Most existing methods to learn representations of these data consider only tabular time series (e.g., lab measurements and vital signs), and do not naturally extend to modelling a full, multimodal time series. In this work, we propose a contrastive pre-training strategy to learn representations of multimodal time series. We consider a setting where the time series contains sequences of (1) high-frequency electrocardiograms and (2) structured data from labs and vitals. We outline a strategy to generate augmentations of these data for contrastive learning, building on recent work in representation learning for medical data. We evaluate our method on a real-world dataset, finding it obtains improved or competitive performance when compared to baselines on two downstream tasks.
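The abstract does not spell out the pre-training objective, but contrastive pre-training of this kind is typically built around an InfoNCE-style loss, where paired views of the same patient window (e.g., an ECG segment and its augmentation, or a concurrent labs/vitals window) act as positives and all other pairs in the batch act as negatives. A minimal NumPy sketch of such a loss (the function name, temperature value, and toy shapes are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    z1[i] and z2[i] are embeddings of two views of the same example
    (the positive pair); z1[i] vs z2[j], j != i, are negatives.
    Shapes: (N, D) each. This is an illustrative sketch, not the
    authors' implementation.
    """
    # L2-normalize so similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # (N, N) matrix of scaled pairwise similarities
    logits = (z1 @ z2.T) / temperature
    # cross-entropy with the diagonal (matched pairs) as targets,
    # computed with a max-shift for numerical stability
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: embeddings of two augmented views of a batch of 8 windows.
rng = np.random.default_rng(0)
view_a = rng.normal(size=(8, 16))
view_b = view_a + 0.05 * rng.normal(size=(8, 16))  # mildly perturbed positives
loss = info_nce_loss(view_a, view_b)
```

In the multimodal setting described above, `z1` and `z2` would come from separate encoders (one for high-frequency ECG, one for tabular labs/vitals) projected into a shared embedding space; minimizing the loss pulls matched windows together and pushes mismatched ones apart.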
Cite
Text
Raghu et al. "Contrastive Pre-Training for Multimodal Medical Time Series." NeurIPS 2022 Workshops: TS4H, 2022.

Markdown
[Raghu et al. "Contrastive Pre-Training for Multimodal Medical Time Series." NeurIPS 2022 Workshops: TS4H, 2022.](https://mlanthology.org/neuripsw/2022/raghu2022neuripsw-contrastive/)

BibTeX
@inproceedings{raghu2022neuripsw-contrastive,
  title = {{Contrastive Pre-Training for Multimodal Medical Time Series}},
  author = {Raghu, Aniruddh and Chandak, Payal and Alam, Ridwan and Guttag, John and Stultz, Collin},
  booktitle = {NeurIPS 2022 Workshops: TS4H},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/raghu2022neuripsw-contrastive/}
}