Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms

Abstract

As Large Language Models (LLMs) are used for increasingly complex cognitive tasks, a natural question is whether AI really *understands*. The study of understanding in LLMs is in its infancy, and the community has yet to incorporate research and insights from philosophy, psychology, and education. Here we focus on understanding *algorithms*, and propose a hierarchy of levels of understanding. We validate the hierarchy using a study with human subjects (undergraduate and graduate students). Following this, we apply the hierarchy to large language models (generations of GPT), revealing interesting similarities and differences with humans. We expect that our rigorous criteria for algorithm understanding will help monitor and quantify AI's progress in such cognitive domains.

Cite

Text

Reid and Vempala. "Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms." NeurIPS 2024 Workshops: Behavioral_ML, 2024.

Markdown

[Reid and Vempala. "Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms." NeurIPS 2024 Workshops: Behavioral_ML, 2024.](https://mlanthology.org/neuripsw/2024/reid2024neuripsw-gpt/)

BibTeX

@inproceedings{reid2024neuripsw-gpt,
  title     = {{Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI's Understanding of Algorithms}},
  author    = {Reid, Mirabel and Vempala, Santosh},
  booktitle = {NeurIPS 2024 Workshops: Behavioral_ML},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/reid2024neuripsw-gpt/}
}