Context-Sensitive Semantic Reasoning in Large Language Models
Abstract
The development of large language models (LLMs) holds promise for increasing the scale and breadth of experiments probing human cognition. LLMs will be useful for studying the human mind to the extent that their behaviors and representations are aligned with those of humans. Here we test this alignment by measuring the degree to which LLMs reproduce the context sensitivity demonstrated by humans in semantic reasoning tasks. We show in two simulations that, like humans, the behavior of leading LLMs is sensitive to both local context and task context, reasoning about the same item differently when it is presented in different contexts or tasks. However, the representations derived from LLM text embedding models do not exhibit the same degree of context sensitivity. These results suggest that LLMs may provide useful models of context-dependent human behavior, but cognitive scientists should be cautious when assuming that embeddings reflect the same context sensitivity.
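The kind of embedding-based probe the abstract alludes to can be illustrated with a minimal sketch: embed the same item in different local contexts and compare the resulting vectors. This is an illustrative assumption, not the paper's actual protocol; the model name, sentences, and cosine-similarity measure below are chosen for demonstration only.

```python
# Illustrative sketch (not the paper's protocol): compare embeddings of the
# same item ("bat") placed in different local contexts, assuming the
# sentence-transformers library and the all-MiniLM-L6-v2 model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

sentences = [
    "The bat flew out of the cave at dusk.",      # animal sense
    "He swung the bat and hit a home run.",       # sports-equipment sense
    "The bat hung upside down from the branch.",  # animal sense again
]
emb = model.encode(sentences)  # array of shape (3, embedding_dim)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A context-sensitive representation should place the two animal-sense
# sentences closer together than the animal/sports-sense pair.
print("animal vs. animal :", cosine(emb[0], emb[2]))
print("animal vs. sports :", cosine(emb[0], emb[1]))
```

If the similarity between the two animal-sense sentences is not reliably higher than the cross-sense similarity, that would be one (coarse) sign that the embedding space is less context sensitive than human judgments of the same sentences.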
Cite

Text
Giallanza and Campbell. "Context-Sensitive Semantic Reasoning in Large Language Models." ICLR 2024 Workshops: Re-Align, 2024.

Markdown
[Giallanza and Campbell. "Context-Sensitive Semantic Reasoning in Large Language Models." ICLR 2024 Workshops: Re-Align, 2024.](https://mlanthology.org/iclrw/2024/giallanza2024iclrw-contextsensitive/)

BibTeX
@inproceedings{giallanza2024iclrw-contextsensitive,
title = {{Context-Sensitive Semantic Reasoning in Large Language Models}},
author = {Giallanza, Tyler and Campbell, Declan Iain},
booktitle = {ICLR 2024 Workshops: Re-Align},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/giallanza2024iclrw-contextsensitive/}
}