Multitask Learning for Brain-Computer Interfaces
Abstract
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.
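The paper's own model is not reproduced here; purely as a rough illustration of the general idea the abstract describes (learning a prior over linear classifier weights from previously recorded subjects and combining it with whatever subject-specific calibration data is available), the following is a minimal NumPy sketch. All function names, the ridge-style per-subject fit, and the Gaussian-prior MAP update are illustrative assumptions, not the authors' method.

import numpy as np

def fit_subject_weights(X, y, reg=1.0):
    # Ridge-style weights for one prior subject's linear classifier
    # (features X, labels y in {-1, +1}); a stand-in for any linear BCI model.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

def cross_subject_prior(weight_list, jitter=1e-6):
    # Pool per-subject weight vectors into a Gaussian prior
    # (mean and covariance) over classifier weights.
    W = np.stack(weight_list)
    mu = W.mean(axis=0)
    Sigma = np.cov(W, rowvar=False) + jitter * np.eye(W.shape[1])
    return mu, Sigma

def adapt_to_new_subject(X, y, mu, Sigma, noise=1.0):
    # MAP estimate combining the cross-subject prior with the new
    # subject's (possibly empty) calibration set. With no data this
    # falls back to the prior mean, i.e. the zero-calibration case.
    if len(y) == 0:
        return mu
    P = np.linalg.inv(Sigma)
    A = X.T @ X / noise + P
    b = X.T @ y / noise + P @ mu
    return np.linalg.solve(A, b)

In this sketch, predictions for a new trial x are made via sign(w @ x); as more calibration trials accumulate, the MAP solution shifts smoothly from the pooled prior toward the subject-specific fit.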
Cite
Text
Alamgir et al. "Multitask Learning for Brain-Computer Interfaces." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
Markdown
[Alamgir et al. "Multitask Learning for Brain-Computer Interfaces." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.](https://mlanthology.org/aistats/2010/alamgir2010aistats-multitask/)
BibTeX
@inproceedings{alamgir2010aistats-multitask,
title = {{Multitask Learning for Brain-Computer Interfaces}},
author = {Alamgir, Morteza and Grosse-Wentrup, Moritz and Altun, Yasemin},
booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
year = {2010},
pages = {17-24},
volume = {9},
url = {https://mlanthology.org/aistats/2010/alamgir2010aistats-multitask/}
}