Predictive Minds: LLMs as Atypical Active Inference Agents

Abstract

Large language models (LLMs) like GPT are often conceptualized as passive predictors, simulators, or even 'stochastic parrots'. We instead conceptualize LLMs by drawing on the theory of active inference, which originates in cognitive science and neuroscience. We examine similarities and differences between traditional active inference systems and LLMs, leading to the conclusion that, currently, LLMs lack a tight feedback loop between acting in the world and perceiving the impacts of their actions, but otherwise fit within the active inference paradigm. We list reasons why this loop may soon be closed, and possible consequences of this, including enhanced model self-awareness and the drive to minimize prediction error by changing the world.
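To make the abstract's central distinction concrete, here is a minimal, illustrative sketch (not from the paper) contrasting an LLM-style open-loop predictor, which never observes the effects of its outputs, with a toy closed-loop agent that acts, perceives the result, and updates its beliefs to reduce prediction error. All names and the toy dynamics are hypothetical and only stand in for the conceptual loop described above.

```python
import random

def open_loop_predictor(history, steps=5):
    """Predict future values from past values; never observes outcomes of its predictions."""
    estimate = sum(history) / len(history)
    # Predictions feed only on the fixed history; there is no feedback from the world.
    return [estimate for _ in range(steps)]

def closed_loop_agent(steps=5):
    """Act on a toy world, perceive the result, and update beliefs each step."""
    world_state = random.uniform(-1.0, 1.0)
    belief = 0.0
    for _ in range(steps):
        action = belief - world_state                           # act to pull the world toward the belief
        world_state += 0.5 * action + random.gauss(0.0, 0.1)    # the world responds, with noise
        prediction_error = world_state - belief                 # perceive the consequence of acting
        belief += 0.3 * prediction_error                        # update beliefs from the observation
    return belief, world_state

if __name__ == "__main__":
    print(open_loop_predictor([0.2, 0.4, 0.3]))
    print(closed_loop_agent())
```

The point of the sketch is only the structural difference: the first function's "loop" never touches the world, while the second minimizes prediction error both by changing its beliefs and by changing the world.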

Cite

Text

Kulveit. "Predictive Minds: LLMs as Atypical Active Inference Agents." NeurIPS 2023 Workshops: SoLaR, 2023.

Markdown

[Kulveit. "Predictive Minds: LLMs as Atypical Active Inference Agents." NeurIPS 2023 Workshops: SoLaR, 2023.](https://mlanthology.org/neuripsw/2023/kulveit2023neuripsw-predictive/)

BibTeX

@inproceedings{kulveit2023neuripsw-predictive,
  title     = {{Predictive Minds: LLMs as Atypical Active Inference Agents}},
  author    = {Kulveit, Jan},
  booktitle = {NeurIPS 2023 Workshops: SoLaR},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/kulveit2023neuripsw-predictive/}
}