Does GPT Really Get It? a Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms
Abstract
As Large Language Models (LLMs) are used for increasingly complex cognitive tasks, a natural question is whether AI really understands. The study of understanding in LLMs is in its infancy, and the community has yet to incorporate research and insights from philosophy, psychology, and education. Here we focus on the understanding of algorithms, and propose a hierarchy of levels of understanding. We validate the hierarchy through a study with human subjects (undergraduate and graduate students). We then apply the hierarchy to large language models (generations of GPT), revealing interesting similarities to and differences from humans. We expect that our rigorous criteria for algorithm understanding will help monitor and quantify AI's progress in such cognitive domains.
Cite
Text
Reid and Vempala. "Does GPT Really Get It? a Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I2.32140
Markdown
[Reid and Vempala. "Does GPT Really Get It? a Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/reid2025aaai-gpt/) doi:10.1609/AAAI.V39I2.32140
BibTeX
@inproceedings{reid2025aaai-gpt,
title = {{Does GPT Really Get It? a Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms}},
author = {Reid, Mirabel and Vempala, Santosh S.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {1492-1500},
doi = {10.1609/AAAI.V39I2.32140},
url = {https://mlanthology.org/aaai/2025/reid2025aaai-gpt/}
}