Population Transformer: Learning Population-Level Representations of Intracranial Activity

Abstract

We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings. We address key challenges in scaling models with neural time-series data, namely, sparse and variable electrode distribution across subjects and datasets. The Population Transformer (PopT) stacks on top of pretrained representations and enhances downstream decoding by enabling learned aggregation of multiple spatially-sparse data channels. The pretrained PopT lowers the amount of data required for downstream decoding experiments, while increasing accuracy, even on held-out subjects and tasks. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how they can be used to extract neuroscience insights from massive amounts of data. We release our code as well as a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability.
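The abstract describes the core idea: aggregating a variable number of spatially-sparse channel embeddings into a single fixed-size population-level representation. The paper uses a transformer stacked on pretrained per-channel representations; the sketch below is only a minimal, hypothetical illustration of the learned-aggregation concept using simple attention pooling in numpy (function names and parameters are assumptions, not the authors' implementation).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_channels(channel_embs, w_query):
    """Attention-pool a variable-size set of per-channel embeddings
    into one population-level vector.

    Hypothetical sketch: the actual PopT uses a transformer encoder,
    not a single learned query vector.
    """
    scores = channel_embs @ w_query      # (n_channels,) relevance scores
    weights = softmax(scores)            # learned aggregation weights, sum to 1
    return weights @ channel_embs        # (d,) fixed-size population embedding

rng = np.random.default_rng(0)
d = 16
# Two subjects with different electrode counts: 5 and 9 channels.
subj_a = rng.normal(size=(5, d))
subj_b = rng.normal(size=(9, d))
w = rng.normal(size=d)  # stand-in for learned query parameters

pop_a = aggregate_channels(subj_a, w)
pop_b = aggregate_channels(subj_b, w)
# Output dimension is the same regardless of how many channels a subject has,
# which is what lets one decoder operate across subjects and datasets.
assert pop_a.shape == pop_b.shape == (d,)
```

The key property shown is that the output shape is independent of the number of input channels, so downstream decoders can be shared across subjects with different electrode layouts.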

Cite

Text

Chau et al. "Population Transformer: Learning Population-Level Representations of Intracranial Activity." NeurIPS 2024 Workshops: NeuroAI, 2024.

Markdown

[Chau et al. "Population Transformer: Learning Population-Level Representations of Intracranial Activity." NeurIPS 2024 Workshops: NeuroAI, 2024.](https://mlanthology.org/neuripsw/2024/chau2024neuripsw-population/)

BibTeX

@inproceedings{chau2024neuripsw-population,
  title     = {{Population Transformer: Learning Population-Level Representations of Intracranial Activity}},
  author    = {Chau, Geeling and Wang, Christopher and Talukder, Sabera J and Subramaniam, Vighnesh and Soedarmadji, Saraswati and Yue, Yisong and Katz, Boris and Barbu, Andrei},
  booktitle = {NeurIPS 2024 Workshops: NeuroAI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chau2024neuripsw-population/}
}