LEA: Learning Latent Embedding Alignment Model for fMRI Decoding and Encoding
Abstract
The connection between brain activity and visual stimuli is crucial to understanding the human brain. Although deep generative models have made progress in recovering visual stimuli from brain recordings by generating images conditioned on fMRI signals, producing semantically consistent images remains challenging. Moreover, predicting fMRI signals from visual stimuli remains a hard problem. In this paper, we introduce a unified framework that addresses both fMRI decoding and encoding. We train two latent spaces to represent and reconstruct fMRI signals and visual images, respectively. By aligning these two latent spaces, we seamlessly transform between fMRI signals and visual stimuli. Our model, called Latent Embedding Alignment (LEA), can recover visual stimuli from fMRI signals and predict brain activity from images. LEA outperforms existing methods on multiple fMRI decoding and encoding benchmarks, offering a comprehensive solution for modeling the relationship between fMRI signals and visual stimuli. The code is available at \url{https://github.com/naiq/LEA}.
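The abstract describes the core mechanism at a high level: one latent space for fMRI signals, one for images, and an alignment between the two so that decoding (fMRI to image) and encoding (image to fMRI) become cross-space transformations. The sketch below illustrates that idea only; it is not the released implementation. The module names, dimensions, and the simple cosine-based alignment loss are all assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentAutoencoder(nn.Module):
    """Maps an input vector to a latent code of shared size and back."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, input_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def alignment_loss(z_fmri, z_image):
    """Pull paired fMRI/image latents together (illustrative cosine loss)."""
    return 1.0 - F.cosine_similarity(z_fmri, z_image, dim=-1).mean()


# Toy shapes: flattened fMRI voxels and flattened image features (assumed).
fmri_ae = LatentAutoencoder(input_dim=4000, latent_dim=256)
image_ae = LatentAutoencoder(input_dim=1024, latent_dim=256)

fmri = torch.randn(8, 4000)    # batch of fMRI signals
images = torch.randn(8, 1024)  # paired image features

z_f, fmri_rec = fmri_ae(fmri)
z_i, image_rec = image_ae(images)

loss = (
    F.mse_loss(fmri_rec, fmri)        # reconstruct fMRI signals
    + F.mse_loss(image_rec, images)   # reconstruct images
    + alignment_loss(z_f, z_i)        # align the two latent spaces
)

# Decoding: fMRI -> aligned latent -> image decoder.
decoded_image = image_ae.decoder(fmri_ae.encoder(fmri))
# Encoding: image -> aligned latent -> fMRI decoder.
predicted_fmri = fmri_ae.decoder(image_ae.encoder(images))

Once the latent spaces are aligned, both directions reuse the same components: an encoder from one modality feeds the decoder of the other, which is what lets a single framework cover both decoding and encoding.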
Cite
Qian et al. "LEA: Learning Latent Embedding Alignment Model for fMRI Decoding and Encoding." Transactions on Machine Learning Research, 2024.
BibTeX:
@article{qian2024tmlr-lea,
title = {{LEA: Learning Latent Embedding Alignment Model for fMRI Decoding and Encoding}},
author = {Qian, Xuelin and Wang, Yikai and Sun, Xinwei and Fu, Yanwei and Xue, Xiangyang and Feng, Jianfeng},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/qian2024tmlr-lea/}
}