Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation

Abstract

To what extent can LLMs serve as components of a cognitive model of language generation? In this paper, we approach this question by exploring a neuro-symbolic implementation of the algorithmic cognitive model of referential expression generation by Dale & Reiter (1995). A symbolic task analysis, which casts generation as an iterative procedure, scaffolds both symbolic and gpt-3.5-turbo-based modules. We compare this implementation to an ablated model and to a one-shot LLM-only baseline on the A3DS dataset (Tsvilodub & Franke, 2023), and find that our hybrid approach is both cognitively plausible and performant in complex contexts, while also allowing for more open-ended modeling of language generation in a larger domain.
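The iterative procedure the paper scaffolds is based on Dale & Reiter's (1995) incremental algorithm, which selects attributes of a target referent one at a time until all distractors are ruled out. The sketch below is a minimal, purely symbolic illustration of that algorithm; in the paper's hybrid model, some modules are instead backed by gpt-3.5-turbo. All names and the toy scene are illustrative, not taken from the paper's code.

```python
# Minimal sketch of Dale & Reiter's (1995) incremental algorithm.
# Objects are dicts mapping attributes to values; names are illustrative.

def incremental_algorithm(target, distractors, preference_order):
    """Select attributes of `target` until all distractors are ruled out."""
    description = {}
    remaining = list(distractors)
    for attribute in preference_order:
        value = target.get(attribute)
        if value is None:
            continue
        # Does this attribute rule out at least one remaining distractor?
        if any(d.get(attribute) != value for d in remaining):
            description[attribute] = value
            remaining = [d for d in remaining if d.get(attribute) == value]
        if not remaining:  # referent uniquely identified
            break
    return description

# Toy scene: refer to a small red ball among two other objects.
target = {"type": "ball", "color": "red", "size": "small"}
distractors = [
    {"type": "ball", "color": "blue", "size": "small"},
    {"type": "cube", "color": "red", "size": "large"},
]
print(incremental_algorithm(target, distractors, ["color", "size", "type"]))
# → {'color': 'red', 'size': 'small'}
```

In the scaffolded variant described in the abstract, steps such as checking or verbalizing an attribute can be delegated to an LLM module, while the symbolic loop structure above controls the overall generation procedure.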

Cite

Text

Tsvilodub et al. "Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation." ICML 2024 Workshops: LLMs_and_Cognition, 2024.

Markdown

[Tsvilodub et al. "Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/tsvilodub2024icmlw-cognitive/)

BibTeX

@inproceedings{tsvilodub2024icmlw-cognitive,
  title     = {{Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation}},
  author    = {Tsvilodub, Polina and Franke, Michael and Carcassi, Fausto},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/tsvilodub2024icmlw-cognitive/}
}