CLIP-MSM: A Multi-Semantic Mapping Brain Representation for Human High-Level Visual Cortex
Abstract
Prior work employing deep neural networks (DNNs) with explainable techniques has identified category-selective representations in the human visual cortex. However, constructing high-performing encoding models that accurately capture brain responses to coexisting multiple semantics remains elusive. Here, we used CLIP models combined with CLIP Dissection to establish a multi-semantic mapping framework (CLIP-MSM) for hypothesis-free analysis of the human high-level visual cortex. First, we utilize CLIP models to construct voxel-wise encoding models that predict visual cortical responses to natural scene images. Then, we apply CLIP Dissection and normalize the semantic mapping score to map single brain voxels to multiple semantics. Our findings indicate that CLIP Dissection applied to DNNs modeling the human high-level visual cortex yields better interpretability accuracy than Network Dissection. In addition, to demonstrate how our method enables fine-grained discovery in hypothesis-free analysis, we quantify the agreement between CLIP-MSM's reconstructed brain activation in response to the categories of faces, bodies, places, words, and food and the ground-truth brain activation. We show that CLIP-MSM provides more accurate predictions of visual responses than CLIP Dissection. Our results have been validated on two large natural image datasets: the Natural Scenes Dataset (NSD) and the Natural Object Dataset (NOD).
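The voxel-wise encoding step described in the abstract can be sketched as a regularized linear map from image embeddings to voxel responses. The sketch below is a minimal illustration only, not the paper's implementation: it substitutes random stand-ins for the CLIP image features and fMRI responses, and uses closed-form ridge regression for the encoding model.

```python
import numpy as np

# Hypothetical sketch of a voxel-wise encoding model: image features
# (random stand-ins for CLIP embeddings) are linearly mapped to
# simulated voxel responses via ridge regression.
rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 200, 64, 10
X = rng.standard_normal((n_images, n_features))       # stand-in image features
W_true = rng.standard_normal((n_features, n_voxels))  # simulated ground-truth weights
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))  # noisy "responses"

# Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Encoding accuracy per voxel: Pearson r between predicted and observed responses
Y_pred = X @ W_hat
r = np.array([np.corrcoef(Y[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)])
print(f"mean voxel-wise correlation: {r.mean():.3f}")
```

In the paper's setting, `X` would hold CLIP-derived features of natural scene images and `Y` measured fMRI responses; the per-voxel correlation is the standard way to score such encoding models.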
Cite
Text
Yang et al. "CLIP-MSM: A Multi-Semantic Mapping Brain Representation for Human High-Level Visual Cortex." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I9.32994

Markdown
[Yang et al. "CLIP-MSM: A Multi-Semantic Mapping Brain Representation for Human High-Level Visual Cortex." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/yang2025aaai-clip-a/) doi:10.1609/AAAI.V39I9.32994

BibTeX
@inproceedings{yang2025aaai-clip-a,
title = {{CLIP-MSM: A Multi-Semantic Mapping Brain Representation for Human High-Level Visual Cortex}},
author = {Yang, Guoyuan and Xue, Mufan and Mao, Ziming and Zheng, Haofang and Xu, Jia and Sheng, Dabin and Sun, Ruotian and Yang, Ruoqi and Li, Xuesong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {9184--9192},
doi = {10.1609/AAAI.V39I9.32994},
url = {https://mlanthology.org/aaai/2025/yang2025aaai-clip-a/}
}