Pre-Training Graph Contrastive Masked Autoencoders Are Strong Distillers for EEG
Abstract
Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by formulating it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gap between unlabeled and labeled as well as high- and low-density EEG data. Our approach introduces a novel unified graph self-supervised pre-training paradigm, which seamlessly integrates graph contrastive pre-training with graph masked autoencoder pre-training. Furthermore, we propose a graph topology distillation loss function that allows a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data during pre-training and fine-tuning. This method effectively handles missing electrodes through contrastive distillation. We validate the effectiveness of EEG-DisGCMAE across four classification tasks using two large clinical EEG datasets.
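To make the abstract's objective concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how the three described ingredients could be combined: an InfoNCE-style graph contrastive loss over two augmented views, a masked-autoencoder reconstruction loss on masked electrode nodes, and a topology distillation term that aligns the low-density student's pairwise node-similarity structure with the high-density teacher's. All function and variable names are hypothetical, and the sketch assumes the teacher's embeddings have already been restricted to the electrodes shared with the student.

```python
# Illustrative sketch of a combined pre-training objective; assumes node embeddings
# come from arbitrary graph encoders (not shown). Names are hypothetical.
import torch
import torch.nn.functional as F


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE between node embeddings of two augmented graph views (N x D each)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau              # N x N similarity matrix
    labels = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


def masked_recon_loss(x: torch.Tensor, x_hat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked-autoencoder term: reconstruction error on masked nodes only."""
    return F.mse_loss(x_hat[mask], x[mask])


def topology_distill_loss(z_student: torch.Tensor, z_teacher: torch.Tensor) -> torch.Tensor:
    """Match the student's pairwise cosine-similarity (topology) structure to the teacher's.

    Assumes the teacher embeddings are indexed at the electrodes shared with the
    low-density student, so both matrices have the same number of rows.
    """
    s = F.normalize(z_student, dim=-1)
    t = F.normalize(z_teacher, dim=-1)
    return F.mse_loss(s @ s.t(), (t @ t.t()).detach())  # teacher side is frozen


# Toy usage with random tensors standing in for encoder outputs.
N, D = 32, 16
z_view1 = torch.randn(N, D, requires_grad=True)
z_view2 = torch.randn(N, D, requires_grad=True)
x = torch.randn(N, D)                        # original node features
x_hat = torch.randn(N, D, requires_grad=True)  # decoder reconstruction
mask = torch.rand(N) < 0.5                   # randomly masked electrode nodes
z_teacher = torch.randn(N, D)                # teacher embeddings at shared electrodes

loss = (contrastive_loss(z_view1, z_view2)
        + masked_recon_loss(x, x_hat, mask)
        + topology_distill_loss(z_view1, z_teacher))
loss.backward()
```

In this sketch the three terms are simply summed; weighting them, choosing the graph augmentations, and selecting the shared-electrode mapping are design choices the paper addresses but that are not specified here.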
Cite
Text
Wei et al. "Pre-Training Graph Contrastive Masked Autoencoders Are Strong Distillers for EEG." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Wei et al. "Pre-Training Graph Contrastive Masked Autoencoders Are Strong Distillers for EEG." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wei2025icml-pretraining/)
BibTeX
@inproceedings{wei2025icml-pretraining,
title = {{Pre-Training Graph Contrastive Masked Autoencoders Are Strong Distillers for EEG}},
author = {Wei, Xinxu and Zhao, Kanhao and Jiao, Yong and Xie, Hua and He, Lifang and Zhang, Yu},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {66358--66377},
volume = {267},
url = {https://mlanthology.org/icml/2025/wei2025icml-pretraining/}
}