Self-Supervised Transformers for fMRI Representation
Abstract
We present TFF, a Transformer framework for the analysis of functional Magnetic Resonance Imaging (fMRI) data. TFF employs a two-phase training approach. First, self-supervised training is applied to a collection of fMRI scans, where the model is trained to reconstruct 3D volume data. Second, the pre-trained model is fine-tuned on specific tasks using ground-truth labels. Our results show state-of-the-art performance on a variety of fMRI tasks, including age and gender prediction, as well as schizophrenia recognition. Our code for training, the network architecture, and the results is attached as supplementary material.
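The abstract describes the two-phase scheme only at a high level. The toy PyTorch sketch below illustrates the general idea, reconstruction-based pre-training followed by supervised fine-tuning, under assumptions of our own: the module names (TFFEncoder, the linear decoder and classifier head), dimensions, and losses are hypothetical placeholders, not the architecture or code released with the paper.

# Conceptual sketch of the two-phase training described in the abstract.
# All names and dimensions here are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn

class TFFEncoder(nn.Module):
    """Toy stand-in: embeds a sequence of flattened 3D volumes with a Transformer."""
    def __init__(self, vol_dim=4096, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(vol_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):              # x: (batch, time, vol_dim)
        return self.transformer(self.embed(x))

encoder = TFFEncoder()
decoder = nn.Linear(256, 4096)         # maps each embedding back to a flattened volume

# Phase 1: self-supervised pre-training (reconstruct the input volumes).
x = torch.randn(2, 20, 4096)           # synthetic fMRI: 2 scans, 20 time points
recon = decoder(encoder(x))
pretrain_loss = nn.functional.mse_loss(recon, x)

# Phase 2: supervised fine-tuning with ground-truth labels (e.g. gender).
classifier = nn.Linear(256, 2)
logits = classifier(encoder(x).mean(dim=1))    # pool embeddings over time
labels = torch.tensor([0, 1])
finetune_loss = nn.functional.cross_entropy(logits, labels)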
Cite
Text
Malkiel et al. "Self-Supervised Transformers for fMRI Representation." Medical Imaging with Deep Learning, 2023.
Markdown
[Malkiel et al. "Self-Supervised Transformers for fMRI Representation." Medical Imaging with Deep Learning, 2023.](https://mlanthology.org/midl/2023/malkiel2023midl-selfsupervised/)
BibTeX
@inproceedings{malkiel2023midl-selfsupervised,
title = {{Self-Supervised Transformers for fMRI Representation}},
author = {Malkiel, Itzik and Rosenman, Gony and Wolf, Lior and Hendler, Talma},
booktitle = {Medical Imaging with Deep Learning},
year = {2023},
pages = {895--913},
volume = {172},
url = {https://mlanthology.org/midl/2023/malkiel2023midl-selfsupervised/}
}