PreND: Enhancing Intrinsic Motivation in Reinforcement Learning Through Pre-Trained Network Distillation

Abstract

Intrinsic motivation, inspired by the psychology of developmental learning in infants, stimulates exploration in agents without relying solely on sparse external rewards. Existing reinforcement learning methods such as Random Network Distillation (RND) face significant limitations, including (1) reliance on raw visual inputs, leading to a lack of meaningful representations, (2) the inability to build a robust latent space, (3) poor target network initialization, and (4) rapid degradation of intrinsic rewards. In this paper, we introduce **Pre**-trained **N**etwork **D**istillation (**PreND**), a novel approach to enhancing intrinsic motivation in reinforcement learning (RL) by improving upon the widely used prediction-based method, RND. **PreND** addresses these challenges by incorporating pre-trained representation models into both the target and predictor networks, yielding more meaningful and stable intrinsic rewards while also improving the representations learned by the model. We additionally explore simple but effective variants of predictor network optimization that control the learning rate. Through experiments on the Atari domain, we demonstrate that **PreND** significantly outperforms RND, offering a more robust intrinsic motivation signal that leads to better exploration, improved overall performance, and greater sample efficiency. This research highlights the importance of the representations of the target and predictor networks in prediction-based intrinsic motivation, setting a new direction for improving RL agents' learning efficiency in sparse reward environments.
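
The abstract only outlines the mechanism, so below is a minimal, hypothetical PyTorch sketch of a PreND-style intrinsic reward: a frozen pre-trained encoder serves as the target network, the predictor reuses pre-trained features with a small trainable head, and the per-state prediction error is the intrinsic reward. The ResNet-18 backbone, head sizes, and learning rate are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical sketch of a PreND-style intrinsic reward module.
# Assumptions (not from the paper): ResNet-18 backbones, a 2-layer
# predictor head, Adam with a small learning rate.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights


class PreNDReward(nn.Module):
    def __init__(self, feat_dim=512, lr=1e-4):
        super().__init__()
        # Target: pre-trained encoder, frozen (never updated).
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.target = nn.Sequential(*list(backbone.children())[:-1]).eval()
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor: pre-trained features plus a small trainable head
        # that learns to match the target's output.
        pred_backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.pred_encoder = nn.Sequential(*list(pred_backbone.children())[:-1]).eval()
        for p in self.pred_encoder.parameters():
            p.requires_grad_(False)
        self.pred_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # A small learning rate slows predictor convergence so the
        # intrinsic reward does not vanish too quickly.
        self.opt = torch.optim.Adam(self.pred_head.parameters(), lr=lr)

    def forward(self, obs):
        # obs: batch of observations, shape (B, 3, H, W), ImageNet-normalized.
        with torch.no_grad():
            tgt = self.target(obs).flatten(1)
            feats = self.pred_encoder(obs)
        pred = self.pred_head(feats)
        # Per-sample prediction error serves as the intrinsic reward.
        err = (pred - tgt).pow(2).mean(dim=1)
        # One gradient step on the head to fit visited states.
        self.opt.zero_grad()
        err.mean().backward()
        self.opt.step()
        return err.detach()
```

Calling the module on a batch of observations returns one intrinsic reward per state and takes one gradient step on the predictor head; lowering `lr` slows the decay of the reward, in the spirit of the learning-rate-control variants mentioned in the abstract.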

Cite

Text

Davoodabadi et al. "PreND: Enhancing Intrinsic Motivation in Reinforcement Learning Through Pre-Trained Network Distillation." NeurIPS 2024 Workshops: IMOL, 2024.

Markdown

[Davoodabadi et al. "PreND: Enhancing Intrinsic Motivation in Reinforcement Learning Through Pre-Trained Network Distillation." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/davoodabadi2024neuripsw-prend/)

BibTeX

@inproceedings{davoodabadi2024neuripsw-prend,
  title     = {{PreND: Enhancing Intrinsic Motivation in Reinforcement Learning Through Pre-Trained Network Distillation}},
  author    = {Davoodabadi, Amin and Dijujin, Negin Hashemi and Baghshah, Mahdieh Soleymani},
  booktitle = {NeurIPS 2024 Workshops: IMOL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/davoodabadi2024neuripsw-prend/}
}