From Words to Worlds: Compositionality for Cognitive Architectures
Abstract
Large language models (LLMs) are highly performant connectionist systems, but do they also exhibit compositionality? More importantly, is that part of why they perform so well? We present empirical analyses across four LLM families (12 models) and three task categories, including a novel task introduced below. Our findings reveal a nuanced relationship in how LLMs learn compositional strategies -- while scaling enhances compositional abilities, instruction tuning often has the reverse effect. This disparity raises open questions about developing and improving large language models in alignment with human cognitive capacities.
Cite
Text
Dhar and Søgaard. "From Words to Worlds: Compositionality for Cognitive Architectures." ICML 2024 Workshops: LLMs_and_Cognition, 2024.
Markdown
[Dhar and Søgaard. "From Words to Worlds: Compositionality for Cognitive Architectures." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/dhar2024icmlw-words/)
BibTeX
@inproceedings{dhar2024icmlw-words,
  title     = {{From Words to Worlds: Compositionality for Cognitive Architectures}},
  author    = {Dhar, Ruchira and S{\o}gaard, Anders},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/dhar2024icmlw-words/}
}