Brain Decodes Deep Nets

Abstract

We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain, thus exposing their hidden internals. Our innovation arises from a surprising usage of brain encoding: predicting brain fMRI measurements in response to images. We report two findings. First, explicit mapping between the brain and deep-network features across the dimensions of space, layers, scales, and channels is crucial. This mapping method, FactorTopy, is plug-and-play for any deep network; with it, one can paint a picture of the network onto the brain (literally!). Second, our visualization shows how different training methods matter: they lead to remarkable differences in hierarchical organization and in scaling behavior as data or network capacity grows. It also provides insight into fine-tuning: how pre-trained models change when adapting to small datasets. We found that brain-like, hierarchically organized networks suffer less from catastrophic forgetting after fine-tuning.
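The brain encoding idea the abstract builds on is commonly realized as a regularized linear readout: image features from a pre-trained network are regressed onto per-voxel fMRI responses, and voxel-wise prediction accuracy tells you which features the brain region "prefers". Below is a minimal sketch of such a linear encoder using synthetic stand-in data; the shapes, the ridge penalty, and the use of plain ridge regression are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 1000 images, 768-dim features from one network
# layer, 500 fMRI voxels. Real experiments would use features from a
# pre-trained vision model and measured fMRI data; these are synthetic.
n_images, n_feat, n_voxels = 1000, 768, 500
features = rng.standard_normal((n_images, n_feat))
true_w = rng.standard_normal((n_feat, n_voxels)) * 0.1
fmri = features @ true_w + 0.05 * rng.standard_normal((n_images, n_voxels))

# Ridge regression: W = (X^T X + lam*I)^{-1} X^T Y
lam = 10.0  # illustrative regularization strength
xtx = features.T @ features + lam * np.eye(n_feat)
w_hat = np.linalg.solve(xtx, features.T @ fmri)

# Voxel-wise encoding accuracy as Pearson correlation (in practice,
# computed on held-out images rather than the training set).
pred = features @ w_hat
r = np.array([np.corrcoef(pred[:, v], fmri[:, v])[0, 1]
              for v in range(n_voxels)])
```

Mapping each voxel to the layer, scale, and channel whose features predict it best is what lets a method like FactorTopy paint the network's organization onto cortex.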

Cite

Text

Yang et al. "Brain Decodes Deep Nets." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02173

Markdown

[Yang et al. "Brain Decodes Deep Nets." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/yang2024cvpr-brain/) doi:10.1109/CVPR52733.2024.02173

BibTeX

@inproceedings{yang2024cvpr-brain,
  title     = {{Brain Decodes Deep Nets}},
  author    = {Yang, Huzheng and Gee, James and Shi, Jianbo},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {23030-23040},
  doi       = {10.1109/CVPR52733.2024.02173},
  url       = {https://mlanthology.org/cvpr/2024/yang2024cvpr-brain/}
}