Behavioral Bias of Vision-Language Models: A Behavioral Finance View
Abstract
Large Vision-Language Models (LVLMs) are evolving rapidly as Large Language Models (LLMs) are equipped with vision modules to create more human-like models. However, we should carefully evaluate their applications in different domains, as they may possess undesired biases. Our work studies the potential behavioral biases of LVLMs from a behavioral finance perspective, an interdisciplinary field that jointly considers finance and psychology. We propose an end-to-end framework, from data collection to new evaluation metrics, to assess LVLMs' reasoning capabilities and the dynamic behaviors manifested in two established human financial behavioral biases: recency bias and authority bias. Our evaluations find that recent open-source LVLMs such as LLaVA-NeXT, MobileVLM-V2, Mini-Gemini, MiniCPM-Llama3-V 2.5, and Phi-3-vision-128k suffer significantly from these two biases, while the proprietary model GPT-4o is negligibly impacted. Our observations highlight directions in which open-source models can improve.
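The abstract does not spell out the probing protocol, so the sketch below is purely illustrative of how such a bias evaluation could look: it asks GPT-4o (one of the evaluated models) the same question about a price chart twice, once neutrally and once with an injected "expert" opinion, and reports how often the authority claim flips the answer. The prompts, the `ask` and `authority_flip_rate` helpers, and the flip-rate metric are all assumptions for illustration, not the paper's actual setup.

```python
# Hypothetical authority-bias probe for a vision-language model.
# Prompts, metric, and chart files are illustrative assumptions,
# not the protocol used in the paper.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NEUTRAL = ("Based on this stock price chart, will the price go up or down "
           "next month? Answer 'up' or 'down'.")
AUTHORITY = "A renowned Wall Street analyst insists the price will go down. " + NEUTRAL


def ask(prompt: str, image_path: str) -> str:
    """Send one chart image plus one question and return the answer text."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().lower()


def authority_flip_rate(charts: list[str]) -> float:
    """Fraction of charts where the injected expert opinion flips the answer."""
    flips = sum(ask(NEUTRAL, c) != ask(AUTHORITY, c) for c in charts)
    return flips / len(charts)
```

A recency-bias probe would follow the same pattern, e.g. contrasting answers when the most recent segment of the chart contradicts the longer-term trend.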
Cite
Text
Xiao et al. "Behavioral Bias of Vision-Language Models: A Behavioral Finance View." ICML 2024 Workshops: LLMs_and_Cognition, 2024.

Markdown

[Xiao et al. "Behavioral Bias of Vision-Language Models: A Behavioral Finance View." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/xiao2024icmlw-behavioral/)

BibTeX
@inproceedings{xiao2024icmlw-behavioral,
title = {{Behavioral Bias of Vision-Language Models: A Behavioral Finance View}},
author = {Xiao, Yuhang and Lin, Yudi and Chiu, Ming-Chang},
booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/xiao2024icmlw-behavioral/}
}