From Laws to Motivation: Guiding Exploration Through Law-Based Reasoning and Rewards

Abstract

Large Language Models (LLMs) and Reinforcement Learning (RL) are two powerful approaches for building autonomous agents. However, with only a limited understanding of the game environment, agents often resort to inefficient exploration and trial and error, struggling to develop long-term strategies or make sound decisions. We propose a method that extracts experience from interaction records to model the underlying laws of the game environment, and uses this experience as internal motivation to guide agents. This experience, expressed in language, is highly flexible: it can either assist agents in reasoning directly or be transformed into rewards that guide training. Our evaluation in $\texttt{Crafter}$ demonstrates that both RL and LLM agents benefit from this experience, leading to improved overall performance.
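To make the reward-shaping path concrete, the following is a minimal Python sketch of how language-expressed laws might be checked against environment observations and converted into an intrinsic reward bonus. It is an illustrative assumption, not the authors' implementation: the Law class, the shaped_reward function, and the example laws are hypothetical, and a real system would extract the laws from interaction records (e.g., by prompting an LLM) rather than hand-coding them.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Law:
    """A law describing the environment, extracted from interaction records."""
    description: str                   # natural-language statement of the law
    satisfied: Callable[[Dict], bool]  # checks the law against an observation
    bonus: float                       # intrinsic reward when the law is followed

def shaped_reward(env_reward: float, obs: Dict, laws: List[Law]) -> float:
    """Add an intrinsic bonus for every law the current observation satisfies."""
    intrinsic = sum(law.bonus for law in laws if law.satisfied(obs))
    return env_reward + intrinsic

# Hypothetical laws one might distill from Crafter gameplay records:
laws = [
    Law("Collect wood before trying to craft a table",
        satisfied=lambda obs: obs.get("inventory", {}).get("wood", 0) > 0,
        bonus=0.5),
    Law("Keep health above half before engaging enemies",
        satisfied=lambda obs: obs.get("health", 0) > 5,
        bonus=0.25),
]

# During RL training, the raw environment reward is replaced by the shaped one:
obs = {"inventory": {"wood": 2}, "health": 7}
print(shaped_reward(env_reward=0.0, obs=obs, laws=laws))  # 0.75

For the LLM-agent path described in the abstract, the same laws would presumably be used in their textual form, injected into the agent's prompt to assist reasoning directly rather than being converted into numeric rewards.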

Cite

Text

Chen et al. "From Laws to Motivation: Guiding Exploration Through Law-Based Reasoning and Rewards." NeurIPS 2024 Workshops: IMOL, 2024.

Markdown

[Chen et al. "From Laws to Motivation: Guiding Exploration Through Law-Based Reasoning and Rewards." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/chen2024neuripsw-laws/)

BibTeX

@inproceedings{chen2024neuripsw-laws,
  title     = {{From Laws to Motivation: Guiding Exploration Through Law-Based Reasoning and Rewards}},
  author    = {Chen, Ziyu and Xiao, Zhiqing and Jiang, Xinbei and Zhao, Junbo},
  booktitle = {NeurIPS 2024 Workshops: IMOL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chen2024neuripsw-laws/}
}