Can Large Language Models Derive High-Level Cognition from Low-Level and Fragmented Foundational Information?

Abstract

As one of the key technologies on the path to Artificial General Intelligence (AGI), Large Language Models (LLMs) have achieved remarkable success. Exploring the capabilities of LLMs is crucial for scientific research, and many studies have proposed new challenges from various angles to probe the boundaries of LLM capabilities. This paper pushes the challenges of information understanding, synthesis, and reasoning to the extreme in order to explore the boundaries of higher-level cognitive capabilities in LLMs. We define this as the task of High-Level Cognition (HLC): deriving high-level conclusions from low-level and fragmented foundational information. To evaluate HLC, we construct a dataset based on soccer matches. Experiments and analysis on this dataset show that current state-of-the-art LLMs cannot effectively solve the HLC task, as their performance is no better than random guessing. However, fine-tuning Llama3-8B-Instruct yields improvements of 14.4%, 48.1%, and 19.4% over the random baseline on the three types of evaluation tasks, indicating that LLMs have great potential to solve the HLC task.

Cite

Text

Liu et al. "Can Large Language Models Derive High-Level Cognition from Low-Level and Fragmented Foundational Information?" AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34648

Markdown

[Liu et al. "Can Large Language Models Derive High-Level Cognition from Low-Level and Fragmented Foundational Information?" AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/liu2025aaai-large/) doi:10.1609/AAAI.V39I23.34648

BibTeX

@inproceedings{liu2025aaai-large,
  title     = {{Can Large Language Models Derive High-Level Cognition from Low-Level and Fragmented Foundational Information?}},
  author    = {Liu, Yang and Wang, Xiaoping and Lu, Kai},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24677--24685},
  doi       = {10.1609/AAAI.V39I23.34648},
  url       = {https://mlanthology.org/aaai/2025/liu2025aaai-large/}
}