MAEEG: Masked Auto-Encoder for EEG Representation Learning

Abstract

Decoding information from bio-signals such as EEG using machine learning has been a challenge due to small datasets and the difficulty of obtaining labels. We propose a reconstruction-based self-supervised learning (SSL) model, the masked auto-encoder for EEG (MAEEG), which learns EEG representations by reconstructing masked EEG features using a transformer architecture. We found that MAEEG can learn representations that significantly improve sleep stage classification (~5% accuracy increase) when only a small number of labels are available. We also found that the input sample length and the masking strategy used during reconstruction-based SSL pretraining strongly affect downstream model performance. Specifically, learning to reconstruct a larger proportion of the signal, masked in more concentrated chunks, yields better performance on sleep stage classification. Our findings provide insight into how reconstruction-based SSL could help representation learning for EEG.
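The abstract describes the pretraining recipe at a high level: encode raw EEG into feature tokens, mask contiguous ("concentrated") chunks of them, and train a transformer to reconstruct the masked signal. The sketch below illustrates that idea in PyTorch. It is not the authors' released implementation; the module names, channel count, token size, masking ratio, and chunk length are all illustrative assumptions.

# Minimal sketch of masked-reconstruction SSL for EEG (illustrative only,
# not the authors' implementation). Assumed setup: 2 EEG channels, windows
# split into 32-sample tokens, contiguous "concentrated" chunk masking.
import torch
import torch.nn as nn

class MAEEGSketch(nn.Module):
    def __init__(self, n_channels=2, d_model=128, token_len=32):
        super().__init__()
        self.n_channels = n_channels
        self.token_len = token_len
        # Convolutional encoder turns raw EEG into a sequence of feature tokens,
        # one token per non-overlapping 32-sample chunk of signal.
        self.encoder = nn.Conv1d(n_channels, d_model,
                                 kernel_size=token_len, stride=token_len)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        # Linear decoder maps each token back to its raw-signal chunk.
        self.decoder = nn.Linear(d_model, n_channels * token_len)

    def forward(self, x, mask_ratio=0.5, chunk=5):
        # x: (batch, n_channels, time), with time divisible by token_len.
        tokens = self.encoder(x).transpose(1, 2)   # (batch, seq, d_model)
        b, seq, _ = tokens.shape
        # Mask contiguous chunks of tokens; chunks may overlap, so the
        # effective masked proportion is at most mask_ratio.
        mask = torch.zeros(b, seq, dtype=torch.bool, device=x.device)
        n_chunks = int(mask_ratio * seq / chunk)
        for i in range(b):
            starts = torch.randint(0, seq - chunk + 1, (n_chunks,))
            for s in starts:
                mask[i, s:s + chunk] = True
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        recon = self.decoder(self.transformer(corrupted))  # (b, seq, C*32)
        # Reconstruction target: the raw signal reshaped into per-token chunks.
        target = (x.reshape(b, self.n_channels, seq, self.token_len)
                   .permute(0, 2, 1, 3)
                   .reshape(b, seq, self.n_channels * self.token_len))
        # MSE loss computed only on masked positions.
        return ((recon - target) ** 2)[mask].mean()

# Usage: pretrain on unlabeled windows, e.g. batches of shape (8, 2, 1920).
model = MAEEGSketch()
loss = model(torch.randn(8, 2, 60 * 32))
loss.backward()

After pretraining, the encoder and transformer would be reused for a downstream task such as sleep stage classification, fine-tuned with whatever labels are available; that fine-tuning head is omitted here.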

Cite

Text

Chien et al. "MAEEG: Masked Auto-Encoder for EEG Representation Learning." NeurIPS 2022 Workshops: TS4H, 2022.

Markdown

[Chien et al. "MAEEG: Masked Auto-Encoder for EEG Representation Learning." NeurIPS 2022 Workshops: TS4H, 2022.](https://mlanthology.org/neuripsw/2022/chien2022neuripsw-maeeg/)

BibTeX

@inproceedings{chien2022neuripsw-maeeg,
  title     = {{MAEEG: Masked Auto-Encoder for EEG Representation Learning}},
  author    = {Chien, Hsiang-Yun Sherry and Goh, Hanlin and Sandino, Christopher Michael and Cheng, Joseph Yitan},
  booktitle = {NeurIPS 2022 Workshops: TS4H},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/chien2022neuripsw-maeeg/}
}