Do Large Language Models Truly Understand Geometric Structures?
Abstract
Geometric ability is a significant challenge for large language models (LLMs) because it demands advanced spatial comprehension and abstract thinking. Existing datasets primarily evaluate LLMs on their final answers, which cannot truly measure their understanding of geometric structures, as LLMs can arrive at correct answers by coincidence. To fill this gap, we introduce the GeomRel dataset, designed to evaluate LLMs' understanding of geometric structures by isolating the core step of geometric relationship identification in problem-solving. Using this benchmark, we conduct thorough evaluations of diverse LLMs and identify key limitations in their understanding of geometric structures. We further propose the Geometry Chain-of-Thought (GeoCoT) method, which enhances LLMs' ability to identify geometric relationships, resulting in significant performance improvements.
Cite
Text
Wang et al. "Do Large Language Models Truly Understand Geometric Structures?" International Conference on Learning Representations, 2025.
Markdown
[Wang et al. "Do Large Language Models Truly Understand Geometric Structures?" International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-large/)
BibTeX
@inproceedings{wang2025iclr-large,
  title     = {{Do Large Language Models Truly Understand Geometric Structures?}},
  author    = {Wang, Xiaofeng and Wang, Yiming and Zhu, Wenhong and Wang, Rui},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/wang2025iclr-large/}
}