Population Transformer: Learning Population-Level Representations of Intracranial Activity
Abstract
We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode locations across subjects. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially sparse data channels. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how they can be used to provide neuroscience insights learned from massive amounts of data. We release a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability.
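To make the aggregation idea concrete, below is a minimal PyTorch sketch of a PopT-style aggregator: per-channel embeddings from a frozen pretrained encoder are combined with embeddings of 3D electrode coordinates and pooled through a learned [CLS] token in a Transformer encoder. All class names, dimensions, and the position-embedding choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PopulationAggregator(nn.Module):
    """Illustrative sketch of a PopT-style aggregator (not the authors' code).

    Takes per-channel embeddings from a frozen pretrained encoder plus 3D
    electrode coordinates, and returns one population-level vector via a
    learned [CLS] token over a Transformer encoder. Because attention is
    permutation-invariant over tokens, the channel count can vary per subject.
    """

    def __init__(self, dim: int = 256, depth: int = 4, heads: int = 8):
        super().__init__()
        self.pos_proj = nn.Linear(3, dim)                # embed (x, y, z) electrode locations
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learned aggregation token
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, chan_emb: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # chan_emb: (batch, n_channels, dim) pretrained per-channel embeddings
        # coords:   (batch, n_channels, 3) electrode positions in a shared coordinate space
        tokens = chan_emb + self.pos_proj(coords)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return out[:, 0]                                 # population-level representation

# Example: 2 recordings, 57 electrodes each, 256-dim channel embeddings.
agg = PopulationAggregator()
pop_vec = agg(torch.randn(2, 57, 256), torch.randn(2, 57, 3))
print(pop_vec.shape)  # torch.Size([2, 256])
```

In this sketch, the [CLS] output would be the input to a downstream decoding head; handling varying electrode locations by embedding coordinates rather than fixing a channel order is one plausible way to address the cross-subject variability the abstract describes.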
Cite
Text
Chau et al. "Population Transformer: Learning Population-Level Representations of Intracranial Activity." ICML 2024 Workshops: AI4Science, 2024.
Markdown
[Chau et al. "Population Transformer: Learning Population-Level Representations of Intracranial Activity." ICML 2024 Workshops: AI4Science, 2024.](https://mlanthology.org/icmlw/2024/chau2024icmlw-population/)
BibTeX
@inproceedings{chau2024icmlw-population,
  title = {{Population Transformer: Learning Population-Level Representations of Intracranial Activity}},
  author = {Chau, Geeling and Wang, Christopher and Talukder, Sabera J and Subramaniam, Vighnesh and Soedarmadji, Saraswati and Yue, Yisong and Katz, Boris and Barbu, Andrei},
  booktitle = {ICML 2024 Workshops: AI4Science},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/chau2024icmlw-population/}
}