Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion

Abstract

This paper presents a novel approach towards the creation of a foundation model that aligns neural data and visual stimulus representations by leveraging contrastive learning. We work with EEG, MEG, and fMRI data. The capabilities of our framework are showcased through three key experiments: decoding visual information from neural data, encoding images into neural representations, and converting between neural modalities. The results demonstrate the model's ability to accurately capture semantic information across different brain imaging techniques, illustrating its potential for decoding, encoding, and modality conversion tasks.
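
To make the alignment idea concrete, here is a minimal sketch of CLIP-style contrastive alignment between a neural encoder and image embeddings. This is not the authors' published code: NeuralProjector, the layer sizes, and the placeholder inputs are illustrative assumptions; in practice the image embeddings would come from a frozen vision backbone.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralProjector(nn.Module):
    """Hypothetical per-modality encoder: maps flattened neural
    features (EEG, MEG, or fMRI) into a shared embedding space."""
    def __init__(self, in_dim: int, emb_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.GELU(), nn.Linear(1024, emb_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def clip_style_loss(neural_emb: torch.Tensor,
                    image_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss: matched (neural, image) pairs on the
    diagonal are pulled together, mismatched pairs pushed apart."""
    logits = neural_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: a batch of 8 fMRI feature vectors aligned to the
# embeddings of the 8 images the subject was viewing.
fmri_encoder = NeuralProjector(in_dim=4096)
fmri = torch.randn(8, 4096)                         # placeholder fMRI features
img_emb = F.normalize(torch.randn(8, 512), dim=-1)  # placeholder image embeddings
loss = clip_style_loss(fmri_encoder(fmri), img_emb)
loss.backward()

Training one such projector per modality against the same image embedding space would make EEG, MEG, and fMRI embeddings mutually comparable, which is the property that supports the decoding, encoding, and modality conversion experiments described in the abstract.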

Cite

Text

Ferrante et al. "Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion." ICLR 2024 Workshops: Re-Align, 2024.

Markdown

[Ferrante et al. "Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion." ICLR 2024 Workshops: Re-Align, 2024.](https://mlanthology.org/iclrw/2024/ferrante2024iclrw-neural/)

BibTeX

@inproceedings{ferrante2024iclrw-neural,
  title     = {{Towards Neural Foundation Models for Vision: Aligning EEG, MEG and fMRI Representations to Perform Decoding, Encoding and Modality Conversion}},
  author    = {Ferrante, Matteo and Boccato, Tommaso and Toschi, Nicola},
  booktitle = {ICLR 2024 Workshops: Re-Align},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/ferrante2024iclrw-neural/}
}