Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming
Abstract
Model-based reinforcement learning (MBRL) has been a primary approach to improving sample efficiency as well as to building generalist agents. However, there has not been much effort toward enhancing the strategy of dreaming itself. This raises the question of whether and how an agent can "dream better" in a more structured and strategic way. In this paper, inspired by observations from cognitive science suggesting that humans use a spatial divide-and-conquer strategy in planning, we propose a new MBRL agent, called Dr. Strategy, which is equipped with a novel Dreaming Strategy. The proposed agent realizes a version of a divide-and-conquer-like strategy in dreaming. This is achieved by learning a set of latent landmarks and then utilizing them to learn a landmark-conditioned highway policy. With the highway policy, the agent can first learn in the dream to move to a landmark, and from there it tackles the exploration and achievement tasks in a more focused way. In experiments, we show that the proposed model outperforms prior pixel-based MBRL methods in various visually complex and partially observable navigation tasks.
Cite
Text
Hamed et al. "Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming." International Conference on Machine Learning, 2024.
Markdown
[Hamed et al. "Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/hamed2024icml-dr/)
BibTeX
@inproceedings{hamed2024icml-dr,
title = {{Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming}},
author = {Hamed, Hany and Kim, Subin and Kim, Dongyeong and Yoon, Jaesik and Ahn, Sungjin},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {17333--17353},
volume = {235},
url = {https://mlanthology.org/icml/2024/hamed2024icml-dr/}
}