Probing Mechanical Reasoning in Large Vision Language Models

Abstract

Mechanical reasoning is a hallmark of human intelligence, defined by its ubiquitous and irreplaceable role in human activities ranging from routine tasks to civil engineering. Endowing machines with mechanical reasoning is therefore an important step towards building human-level artificial intelligence. Here, we leveraged 155 cognitive experiments to test the understanding of system stability, gear and pulley systems, the principle of leverage, inertia and motion, and fluid mechanics in 26 Vision Language Models (VLMs). Results indicate that VLMs consistently perform worse than humans across all domains, and demonstrate particular difficulty in reasoning about gear systems and fluid mechanics. Notably, their performance on these tasks does not improve as the number of parameters increases, suggesting that current attention-based architectures may fail to grasp certain underlying mechanisms required for mechanical reasoning, particularly those pertaining to mental simulations.

Cite

Text

Sun et al. "Probing Mechanical Reasoning in Large Vision Language Models." ICLR 2025 Workshops: Bi-Align, 2025.

Markdown

[Sun et al. "Probing Mechanical Reasoning in Large Vision Language Models." ICLR 2025 Workshops: Bi-Align, 2025.](https://mlanthology.org/iclrw/2025/sun2025iclrw-probing/)

BibTeX

@inproceedings{sun2025iclrw-probing,
  title     = {{Probing Mechanical Reasoning in Large Vision Language Models}},
  author    = {Sun, Haoran and Li, Yijiang and Gao, Qingying and Lyu, Haiyun and Luo, Dezhi and Deng, Hokin},
  booktitle = {ICLR 2025 Workshops: Bi-Align},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/sun2025iclrw-probing/}
}