Probing Conceptual Understanding of Large Visual-Language Models
Abstract
In recent years, large visual-language (V+L) models have achieved great success in various downstream tasks. However, it is not well studied whether these models have a conceptual grasp of the visual content. In this work we focus on the conceptual understanding of these large V+L models. To facilitate this study, we propose novel benchmarking datasets for probing three different aspects of content understanding: 1) relations, 2) composition, and 3) context. Our probes are grounded in cognitive science and help determine whether a V+L model can, for example, tell that snow garnished with a man is implausible, or identify beach furniture by knowing it is located on a beach. We experiment with many recent state-of-the-art V+L models and observe that these models mostly fail to demonstrate conceptual understanding. The study reveals several interesting insights, such as that cross-attention helps in learning conceptual understanding, and that CNNs are better with texture and patterns, while Transformers are better at color and shape. We further utilize some of these insights and investigate a simple finetuning technique that rewards the three conceptual understanding measures, with promising initial results. The proposed benchmarks will drive the community to delve deeper into conceptual understanding and foster advancements in the capabilities of large V+L models. The code and dataset are available at: https://tinyurl.com/vlm-robustness
Cite
Text
Schiappa et al. "Probing Conceptual Understanding of Large Visual-Language Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00186
Markdown
[Schiappa et al. "Probing Conceptual Understanding of Large Visual-Language Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/schiappa2024cvprw-probing/) doi:10.1109/CVPRW63382.2024.00186
BibTeX
@inproceedings{schiappa2024cvprw-probing,
title = {{Probing Conceptual Understanding of Large Visual-Language Models}},
author = {Schiappa, Madeline and Abdullah, Raiyaan and Azad, Shehreen and Claypoole, Jared and Cogswell, Michael and Divakaran, Ajay and Rawat, Yogesh S.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2024},
pages = {1797--1807},
doi = {10.1109/CVPRW63382.2024.00186},
url = {https://mlanthology.org/cvprw/2024/schiappa2024cvprw-probing/}
}