NeuroLM: A Universal Multi-Task Foundation Model for Bridging the Gap Between Language and EEG Signals
Abstract
Recent advancements in large-scale pre-training on neural signals such as electroencephalogram (EEG) have shown promising results, significantly boosting the development of brain-computer interfaces (BCIs) and healthcare. However, these pre-trained models often require full fine-tuning on each downstream task to achieve substantial improvements, limiting their versatility and usability and leading to considerable wasted resources. To tackle these challenges, we propose NeuroLM, the first multi-task foundation model that leverages the capabilities of Large Language Models (LLMs) by regarding EEG signals as a foreign language, endowing the model with multi-task learning and inference capabilities. Our approach begins with learning a text-aligned neural tokenizer through vector-quantized temporal-frequency prediction, which encodes EEG signals into discrete neural tokens. These EEG tokens, generated by the frozen vector-quantized (VQ) encoder, are then fed into an LLM that learns causal EEG information via multi-channel autoregression. Consequently, NeuroLM can understand both EEG and language modalities. Finally, multi-task instruction tuning adapts NeuroLM to various downstream tasks. We are the first to demonstrate that, by incorporating LLMs in this way, NeuroLM unifies diverse EEG tasks within a single model through instruction tuning. The largest variant, NeuroLM-XL, has a record-breaking 1.7B parameters for EEG signal processing and is pre-trained on a large-scale corpus comprising approximately 25,000 hours of EEG data. When evaluated on six diverse downstream datasets, NeuroLM showcases the huge potential of this multi-task learning paradigm.
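The pipeline described in the abstract, a frozen VQ tokenizer that discretizes EEG patches into neural tokens, followed by a single causal LM that attends over both neural and text tokens, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: all class names, dimensions, the toy Transformer stand-in, and the nearest-neighbour codebook lookup below are illustrative assumptions.

```python
# Minimal sketch (not the NeuroLM code) of the idea described above:
# EEG patches are quantized into discrete "neural tokens" by a VQ codebook,
# then mapped into the same embedding space as text tokens so one
# autoregressive LM can process both modalities, e.g. for instruction tuning.
import torch
import torch.nn as nn


class EEGTokenizer(nn.Module):
    """Encodes EEG patches and snaps them to the nearest codebook entry."""

    def __init__(self, patch_dim=200, code_dim=128, codebook_size=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, code_dim), nn.GELU(), nn.Linear(code_dim, code_dim)
        )
        self.codebook = nn.Embedding(codebook_size, code_dim)

    def forward(self, eeg_patches):            # (batch, n_patches, patch_dim)
        z = self.encoder(eeg_patches)           # (batch, n_patches, code_dim)
        # Nearest-neighbour lookup in the codebook -> discrete neural token ids
        dists = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        return dists.argmin(dim=-1)             # (batch, n_patches) integer ids


class NeuroLMSketch(nn.Module):
    """Feeds neural tokens and text tokens through one shared causal LM."""

    def __init__(self, text_vocab=50257, neural_vocab=1024, d_model=256):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.neural_emb = nn.Embedding(neural_vocab, d_model)  # adapter into LM space
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)   # stand-in for a GPT-style LLM
        self.lm_head = nn.Linear(d_model, text_vocab)

    def forward(self, neural_ids, text_ids):
        # Concatenate neural and text token embeddings into one sequence
        x = torch.cat([self.neural_emb(neural_ids), self.text_emb(text_ids)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.lm(x, mask=mask)               # causal attention over EEG + text tokens
        return self.lm_head(h)                  # next-token logits


if __name__ == "__main__":
    tokenizer, model = EEGTokenizer(), NeuroLMSketch()
    eeg = torch.randn(2, 16, 200)               # 2 samples, 16 EEG patches each
    prompt = torch.randint(0, 50257, (2, 8))    # toy instruction tokens
    with torch.no_grad():
        neural_ids = tokenizer(eeg)             # tokenizer kept frozen, as in the paper
    logits = model(neural_ids, prompt)
    print(logits.shape)                         # torch.Size([2, 24, 50257])
```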
Cite
Text
Jiang et al. "NeuroLM: A Universal Multi-Task Foundation Model for Bridging the Gap Between Language and EEG Signals." International Conference on Learning Representations, 2025.
Markdown
[Jiang et al. "NeuroLM: A Universal Multi-Task Foundation Model for Bridging the Gap Between Language and EEG Signals." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/jiang2025iclr-neurolm/)
BibTeX
@inproceedings{jiang2025iclr-neurolm,
title = {{NeuroLM: A Universal Multi-Task Foundation Model for Bridging the Gap Between Language and EEG Signals}},
author = {Jiang, Weibang and Wang, Yansen and Lu, Bao-liang and Li, Dongsheng},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/jiang2025iclr-neurolm/}
}