DeepThought: An Architecture for Autonomous Self-Motivated Systems

Abstract

The ability of large language models (LLMs) to engage in credible dialogues with humans, taking into account the training data and the context of the conversation, has raised discussions about their ability to exhibit intrinsic motivations, agency, or even some degree of consciousness. We argue that the internal architecture of LLMs and their finite and volatile state cannot support any of these properties. By combining insights from complementary learning systems, global neuronal workspace, and attention schema theories, we propose to integrate LLMs and other deep learning systems into an architecture for cognitive language agents able to exhibit properties akin to agency, self-motivation, and even some features of meta-cognition.
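
To make the kind of architecture alluded to above more concrete, the following is a minimal, hypothetical Python sketch of a global-workspace-style agent loop with a persistent episodic store. All names here (Workspace, Module, EpisodicMemory, agent_step) and the bidding/broadcast logic are illustrative assumptions, not the authors' implementation; the persistent memory merely stands in for the fast/slow storage idea from complementary learning systems.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Workspace:
    """Shared state that every module can read; one module wins the broadcast."""
    content: str = ""

@dataclass
class EpisodicMemory:
    """Persistent store (assumption): fast episodic storage that outlives a
    single context window and could later be consolidated into slow weights."""
    episodes: List[str] = field(default_factory=list)

    def store(self, episode: str) -> None:
        self.episodes.append(episode)

@dataclass
class Module:
    """A specialist process (e.g. an LLM call or a perception model) that bids
    for access to the workspace with a salience score."""
    name: str
    propose: Callable[[str], str]
    salience: Callable[[str], float]

def agent_step(workspace: Workspace, modules: List[Module], memory: EpisodicMemory) -> None:
    # Each module reads the current workspace content and proposes an update.
    proposals = [(m.salience(workspace.content), m.name, m.propose(workspace.content))
                 for m in modules]
    # The highest-salience proposal is "broadcast": it becomes the shared content.
    _, winner, content = max(proposals)
    workspace.content = content
    # The broadcast episode is stored so state persists across steps.
    memory.store(f"{winner}: {content}")

if __name__ == "__main__":
    ws = Workspace("user asked: what should I work on next?")
    mem = EpisodicMemory()
    modules = [
        Module("goal_generator",
               lambda c: "goal: finish the experiment writeup",
               lambda c: 0.8),
        Module("self_model",
               lambda c: "note: energy is low, prefer short tasks",
               lambda c: 0.3),
    ]
    for _ in range(2):
        agent_step(ws, modules, mem)
    print(ws.content)
    print(mem.episodes)

In this toy loop the competition-and-broadcast step echoes global workspace theory, while the episodic log is what would let goals and self-descriptions persist beyond a single LLM context; the real architecture proposed in the paper should be taken from the paper itself, not from this sketch.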

Cite

Text

Oliveira et al. "DeepThought: An Architecture for Autonomous Self-Motivated Systems." NeurIPS 2023 Workshops: IMOL, 2023.

Markdown

[Oliveira et al. "DeepThought: An Architecture for Autonomous Self-Motivated Systems." NeurIPS 2023 Workshops: IMOL, 2023.](https://mlanthology.org/neuripsw/2023/oliveira2023neuripsw-deepthought/)

BibTeX

@inproceedings{oliveira2023neuripsw-deepthought,
  title     = {{DeepThought: An Architecture for Autonomous Self-Motivated Systems}},
  author    = {Oliveira, Arlindo and Domingos, Tiago and Figueiredo, Mario and Lima, Pedro},
  booktitle = {NeurIPS 2023 Workshops: IMOL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/oliveira2023neuripsw-deepthought/}
}