V1T: Large-Scale Mouse V1 Response Prediction Using a Vision Transformer
Abstract
Accurate predictive models of neural responses in the visual cortex to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer-based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the self-attention weights learned by the Transformer correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and can be used jointly with behavioral and neural recordings to reveal meaningful characteristic features of the visual cortex.
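The sketch below is a minimal, hypothetical illustration of the kind of model the abstract describes, not the authors' V1T implementation: a tiny Vision Transformer core that patch-embeds a visual stimulus, conditions on behavioral variables via an extra token, and predicts neural responses through per-animal readouts so the core representation is shared across animals. All module names, input sizes, the behavior-token design, and the per-animal readout choice are assumptions made for illustration.

```python
# Illustrative sketch only (assumed architecture, not the published V1T model).
import torch
import torch.nn as nn


class ToyV1Transformer(nn.Module):
    def __init__(self, image_size=36, patch_size=6, dim=64, depth=2,
                 n_behavior=2, neurons_per_animal=(100, 120)):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        # Patch embedding: split the grayscale stimulus into non-overlapping patches.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        # Behavioral variables (e.g. pupil size, running speed -- assumed inputs)
        # are projected to a single extra token.
        self.behavior_embed = nn.Linear(n_behavior, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=2 * dim, batch_first=True)
        self.core = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Shared core, separate linear readout per animal.
        self.readouts = nn.ModuleList(
            [nn.Linear(dim, n) for n in neurons_per_animal])

    def forward(self, image, behavior, animal_id):
        x = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, patches, dim)
        b = self.behavior_embed(behavior).unsqueeze(1)          # (B, 1, dim)
        x = self.core(torch.cat([b, x], dim=1) + self.pos_embed)
        pooled = x.mean(dim=1)
        # ELU + 1 keeps predicted responses positive.
        return nn.functional.elu(self.readouts[animal_id](pooled)) + 1


# Example: one batch of stimuli and behavior variables for animal 0.
model = ToyV1Transformer()
responses = model(torch.randn(8, 1, 36, 36), torch.randn(8, 2), animal_id=0)
print(responses.shape)  # torch.Size([8, 100])
```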
Cite
Text
Li et al. "V1T: Large-Scale Mouse V1 Response Prediction Using a Vision Transformer." Transactions on Machine Learning Research, 2023.
Markdown
[Li et al. "V1T: Large-Scale Mouse V1 Response Prediction Using a Vision Transformer." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/li2023tmlr-v1t/)
BibTeX
@article{li2023tmlr-v1t,
  title = {{V1T: Large-Scale Mouse V1 Response Prediction Using a Vision Transformer}},
  author = {Li, Bryan M. and Cornacchia, Isabel Maria and Rochefort, Nathalie and Onken, Arno},
  journal = {Transactions on Machine Learning Research},
  year = {2023},
  url = {https://mlanthology.org/tmlr/2023/li2023tmlr-v1t/}
}