MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks
Abstract
Multimodal fusion leverages information across modalities to learn better feature representations with the goal of improving performance in fusion-based tasks. However, multimodal datasets, especially in medical settings, are typically smaller than their unimodal counterparts, which can impede the performance of multimodal models. Additionally, an increase in the number of modalities is often associated with an overall increase in the size of the multimodal network, which may be undesirable in medical use cases. Utilizing smaller unimodal encoders may lead to sub-optimal performance, particularly when dealing with high-dimensional clinical data. In this paper, we propose the Modality-INformed knowledge Distillation (MIND) framework, a multimodal model compression approach based on knowledge distillation that transfers knowledge from ensembles of pre-trained deep neural networks of varying sizes into a smaller multimodal student. The teacher models consist of unimodal networks, allowing the student to learn from diverse representations. MIND employs multi-head joint fusion models, as opposed to single-head models, enabling the use of the unimodal encoders for unimodal samples without requiring imputation or masking of absent modalities. As a result, MIND generates an optimized multimodal model, enhancing both multimodal and unimodal representations. It can also be leveraged to balance multimodal learning during training. We evaluate MIND on binary classification and multilabel clinical prediction tasks using clinical time series data and chest X-ray images extracted from publicly available datasets. Additionally, we assess the generalizability of the MIND framework on three non-medical multimodal multiclass benchmark datasets. The experimental results demonstrate that MIND enhances the performance of the smaller multimodal network across all five tasks, as well as various fusion methods and multimodal network architectures, compared to several state-of-the-art baselines.
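The abstract outlines the core recipe: frozen, pre-trained unimodal teachers distill knowledge into a smaller multimodal student that keeps per-modality heads alongside a joint fusion head, so the student remains usable when only one modality is present. The PyTorch sketch below is a minimal illustration of that recipe under simplifying assumptions: one toy teacher per modality instead of an ensemble, made-up layer sizes, and an assumed distillation loss and weighting (`alpha`). It is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dimensions only (not taken from the paper).
TS_DIM, IMG_DIM, EMB, N_CLASSES = 76, 512, 64, 1

# Pre-trained unimodal teachers, frozen during distillation (toy stand-ins).
ts_teacher = nn.Sequential(nn.Linear(TS_DIM, 256), nn.ReLU(), nn.Linear(256, N_CLASSES))
img_teacher = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, N_CLASSES))
for t in (ts_teacher, img_teacher):
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)

class MultiHeadStudent(nn.Module):
    """Smaller multimodal student with per-modality heads and a joint fusion head."""
    def __init__(self):
        super().__init__()
        self.ts_enc = nn.Sequential(nn.Linear(TS_DIM, EMB), nn.ReLU())
        self.img_enc = nn.Sequential(nn.Linear(IMG_DIM, EMB), nn.ReLU())
        self.ts_head = nn.Linear(EMB, N_CLASSES)          # usable alone when only time series is available
        self.img_head = nn.Linear(EMB, N_CLASSES)         # usable alone when only the X-ray is available
        self.fusion_head = nn.Linear(2 * EMB, N_CLASSES)  # joint (fusion) prediction

    def forward(self, ts, img):
        z_ts, z_img = self.ts_enc(ts), self.img_enc(img)
        return (self.ts_head(z_ts),
                self.img_head(z_img),
                self.fusion_head(torch.cat([z_ts, z_img], dim=-1)))

def mind_style_loss(ts_logit, img_logit, fused_logit, ts_t_logit, img_t_logit, y, alpha=0.5):
    """Task loss on all heads plus distillation of the unimodal heads toward the frozen
    teachers' logits. The specific terms and weighting are assumptions for illustration."""
    task = sum(F.binary_cross_entropy_with_logits(l, y) for l in (ts_logit, img_logit, fused_logit))
    distill = F.mse_loss(ts_logit, ts_t_logit) + F.mse_loss(img_logit, img_t_logit)
    return (1 - alpha) * task + alpha * distill

# Smoke test with random tensors standing in for clinical time series, X-ray features, and labels.
student = MultiHeadStudent()
ts, img, y = torch.randn(8, TS_DIM), torch.randn(8, IMG_DIM), torch.randint(0, 2, (8, 1)).float()
with torch.no_grad():
    ts_t, img_t = ts_teacher(ts), img_teacher(img)
loss = mind_style_loss(*student(ts, img), ts_t, img_t, y)
loss.backward()
```

Because the student keeps separate time-series and image heads, a sample with a missing modality can be routed through the corresponding unimodal encoder and head alone, without imputing or masking the absent input.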
Cite
Text
Guerra-Manzanares and Shamout. "MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks." Transactions on Machine Learning Research, 2025.
Markdown
[Guerra-Manzanares and Shamout. "MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/guerramanzanares2025tmlr-mind/)
BibTeX
@article{guerramanzanares2025tmlr-mind,
  title = {{MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks}},
  author = {Guerra-Manzanares, Alejandro and Shamout, Farah},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/guerramanzanares2025tmlr-mind/}
}