Vision Language Models Know Law of Conservation Without Understanding More-or-Less
Abstract
Understanding the law of conservation is a critical milestone in human cognitive development, thought to rest on the apprehension of quantitative concepts and the reversibility of operations. To assess whether this critical component of human intelligence has emerged in Vision Language Models, we curated ConserveBench, a battery of 365 cognitive experiments spanning four dimensions of physical quantity: volume, solid quantity, length, and number. The former two involve transformational tasks, which require understanding the reversibility of operations. The latter two involve non-transformational tasks, which assess understanding of quantity. Surprisingly, we find that while Vision Language Models generally succeed at transformational tasks, they tend to fail at non-transformational tasks. This reveals a dissociation between understanding the reversibility of operations and understanding the concept of quantity, both of which are believed to be cornerstones of understanding the law of conservation in humans.
Cite

Text

Luo et al. "Vision Language Models Know Law of Conservation Without Understanding More-or-Less." ICLR 2025 Workshops: Bi-Align, 2025.

Markdown

[Luo et al. "Vision Language Models Know Law of Conservation Without Understanding More-or-Less." ICLR 2025 Workshops: Bi-Align, 2025.](https://mlanthology.org/iclrw/2025/luo2025iclrw-vision/)

BibTeX
@inproceedings{luo2025iclrw-vision,
  title     = {{Vision Language Models Know Law of Conservation Without Understanding More-or-Less}},
  author    = {Luo, Dezhi and Lyu, Haiyun and Gao, Qingying and Sun, Haoran and Li, Yijiang and Deng, Hokin},
  booktitle = {ICLR 2025 Workshops: Bi-Align},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/luo2025iclrw-vision/}
}