Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment
Abstract
In this paper, we propose a BERT-style self-supervised learning model, VQ-MTM (Vector Quantization Masked Time-Series Modeling), for EEG time-series analysis. At its core, VQ-MTM comprises a theoretically grounded random-projection quantization module and a phase-aligning module guided by the time-phase-shift equivariance of the Fourier transform. Together, the two modules generate well-defined semantic units (akin to words in natural language) for corrupted and periodic time series, thus offering robust and consistent learning signals for EEG self-supervised learning. VQ-MTM also has low model complexity and adapts easily to large-scale datasets. We conduct experiments on five real-world datasets, including two large-scale ones, to verify the efficacy of the proposed model; the results show that VQ-MTM consistently surpasses existing methods by large margins on both seizure detection and classification tasks. Our code is available at https://github.com/HaokunGUI/VQ_MTM.
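The two core ingredients named above can be illustrated with a minimal NumPy sketch. The quantizer below is a hypothetical stand-in for the paper's module: a frozen random projection followed by a nearest-codeword lookup, mapping an EEG patch to a discrete token id. The second part checks the time-phase-shift equivariance of the DFT that guides the phase-aligning module: circularly shifting a signal in time multiplies each frequency bin by a linear phase factor. All dimensions and names (`patch_dim`, `proj_dim`, `quantize`) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Random-projection quantizer (hypothetical sketch, not the paper's exact module) ---
patch_dim, proj_dim, codebook_size = 64, 16, 256
P = rng.standard_normal((patch_dim, proj_dim)) / np.sqrt(proj_dim)  # frozen random projection
codebook = rng.standard_normal((codebook_size, proj_dim))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)         # unit-norm codewords

def quantize(patch):
    """Map an EEG patch to a discrete token id via a frozen random projection."""
    z = patch @ P
    z = z / np.linalg.norm(z)            # normalize the projected patch
    return int(np.argmax(codebook @ z))  # nearest codeword by cosine similarity

token = quantize(rng.standard_normal(patch_dim))
print(0 <= token < codebook_size)  # True

# --- Time-shift / phase-shift equivariance of the DFT ---
n, shift = 128, 5
x = rng.standard_normal(n)
X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, shift))  # x delayed by `shift` samples (circularly)
k = np.arange(n)
# Shifting x by `shift` multiplies its spectrum by exp(-2j*pi*k*shift/n)
print(np.allclose(X_shifted, X * np.exp(-2j * np.pi * k * shift / n)))  # True
```

Because the projection and codebook are frozen random matrices rather than learned, the quantizer contributes no trainable parameters, which is one way such a design keeps model complexity low.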
Cite
Text
Gui et al. "Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment." International Conference on Machine Learning, 2024.
Markdown
[Gui et al. "Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/gui2024icml-vector/)
BibTeX
@inproceedings{gui2024icml-vector,
title = {{Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment}},
author = {Gui, Haokun and Li, Xiucheng and Chen, Xinyang},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {16731--16750},
volume = {235},
url = {https://mlanthology.org/icml/2024/gui2024icml-vector/}
}