Using Adaptive Intrinsic Motivation in RL to Model Learning Across Development

Abstract

Reinforcement learning is a powerful model of animal learning under brief, controlled experimental conditions, but it does not readily explain the development of behavior over an animal's whole lifetime. In this paper, we describe a framework to address this shortcoming by introducing the single-life reinforcement learning setting to cognitive science. We construct an agent with two learning systems: an extrinsic learner that learns within a single lifetime, and an intrinsic learner that learns across lifetimes, equipping the agent with intrinsic motivation. We show that this model outperforms heuristic benchmarks and recapitulates a transition from exploratory to habit-driven behavior, while allowing the agent to learn an interpretable value function. We formulate a precise definition of intrinsic motivation and discuss the philosophical implications of using reinforcement learning as a model of behavior in the real world.
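The abstract describes a two-timescale architecture: an extrinsic learner that adapts within one lifetime, and an intrinsic learner that shapes the reward signal across lifetimes. The sketch below is one minimal way such a setup could look, not the paper's implementation; every concrete choice here (a chain MDP, tabular Q-learning as the within-lifetime learner, a per-state intrinsic bonus theta, and hill climbing on lifetime return as the cross-lifetime optimizer) is an illustrative assumption.

# A minimal sketch of the two-timescale setup described in the abstract,
# NOT the authors' implementation. The environment, learners, and
# meta-optimizer below are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 6, 2     # tiny chain MDP: actions move left/right
GOAL = N_STATES - 1            # extrinsic reward only at the goal state
LIFETIME_STEPS = 200           # one "life" = one uninterrupted run

def step(s, a):
    """Chain dynamics: action 1 moves right, action 0 moves left."""
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    r_ext = 1.0 if s_next == GOAL else 0.0
    return s_next, r_ext

def live_one_life(theta, alpha=0.1, gamma=0.95, eps=0.1):
    """Inner loop: the extrinsic learner runs tabular Q-learning for a
    single lifetime, with theta[s] acting as an intrinsic bonus that
    shapes the reward it observes."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    s, ret_ext = 0, 0.0
    for _ in range(LIFETIME_STEPS):
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s_next, r_ext = step(s, a)
        r = r_ext + theta[s_next]          # extrinsic + intrinsic reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        ret_ext += r_ext
        s = 0 if s_next == GOAL else s_next  # reset to start on success
    return ret_ext

# Outer loop: the intrinsic learner adapts theta across lifetimes, here
# by simple hill climbing on lifetime extrinsic return (an assumption;
# any cross-lifetime optimizer could fill this role).
theta = np.zeros(N_STATES)
best = live_one_life(theta)
for lifetime in range(50):
    candidate = theta + 0.05 * rng.standard_normal(N_STATES)
    ret = live_one_life(candidate)
    if ret >= best:
        theta, best = candidate, ret
print("learned intrinsic bonuses per state:", np.round(theta, 2))

The key structural point the sketch makes is the separation of timescales: the inner loop only ever sees the shaped reward within one life, while the outer loop is scored purely on extrinsic return, so the intrinsic bonuses are selected for how well they guide within-lifetime learning.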

Cite

Text

Sandbrink et al. "Using Adaptive Intrinsic Motivation in RL to Model Learning Across Development." NeurIPS 2024 Workshops: IMOL, 2024.

Markdown

[Sandbrink et al. "Using Adaptive Intrinsic Motivation in RL to Model Learning Across Development." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/sandbrink2024neuripsw-using/)

BibTeX

@inproceedings{sandbrink2024neuripsw-using,
  title     = {{Using Adaptive Intrinsic Motivation in RL to Model Learning Across Development}},
  author    = {Sandbrink, Kai Jappe and Christian, Brian and Nasvytis, Linas and de Witt, Christian Schroeder and Butlin, Patrick},
  booktitle = {NeurIPS 2024 Workshops: IMOL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/sandbrink2024neuripsw-using/}
}