Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search
Abstract
In this paper, we propose Strategist, a new method that utilizes LLMs to acquire new skills for playing multi-agent games through a self-improvement process. Our method gathers quality feedback through self-play simulations with Monte Carlo tree search and LLM-based reflection, which is then used to learn high-level strategic skills, such as how to evaluate states, that guide low-level execution. We show how our method can be used for both action planning and dialogue generation in the context of games, achieving strong performance on both tasks. Specifically, we demonstrate that our method trains agents that outperform both traditional reinforcement learning-based approaches and other LLM-based skill learning approaches in the Game of Pure Strategy (GOPS) and Resistance: Avalon.
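The bi-level idea in the abstract, where a learned high-level state-evaluation heuristic guides low-level search, can be sketched in miniature. The toy subtraction game, function names, and heuristics below are illustrative assumptions, not the paper's actual interfaces or games:

```python
# Toy subtraction game: `n` stones remain; players alternately take 1-3.
# Taking the last stone wins. This is a stand-in domain, not one used
# in the paper.

MOVES = (1, 2, 3)

def naive_eval(n):
    """Uninformative state value for the player to move (0.5 = unknown)."""
    return 0.5

def learned_eval(n):
    """A sharper heuristic of the kind self-play feedback might produce:
    positions with n % 4 == 0 are losing for the player to move."""
    return 0.0 if n % 4 == 0 else 1.0

def search(n, evaluate):
    """One-ply value-guided lookahead: score each move by how bad the
    resulting position is for the opponent, per `evaluate`. A minimal
    stand-in for the low-level tree search the heuristic would guide."""
    best_move, best_score = None, -1.0
    for move in MOVES:
        if move > n:
            continue
        child = n - move
        # Immediate win, otherwise invert the opponent's evaluated value.
        score = 1.0 if child == 0 else 1.0 - evaluate(child)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# With the sharper heuristic, search finds the reply leaving n % 4 == 0.
print(search(10, learned_eval))  # -> 2 (leaves 8, a losing position)
```

The point of the sketch is the separation of concerns: improving `evaluate` (the high-level skill) immediately improves the fixed low-level `search`, which is the self-improvement loop the abstract describes.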
Cite
Text
Light et al. "Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search." ICML 2024 Workshops: AutoRL, 2024.
Markdown
[Light et al. "Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search." ICML 2024 Workshops: AutoRL, 2024.](https://mlanthology.org/icmlw/2024/light2024icmlw-strategist/)
BibTeX
@inproceedings{light2024icmlw-strategist,
title = {{Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search}},
author = {Light, Jonathan and Cai, Min and Chen, Weiqin and Wang, Guanzhi and Chen, Xiusi and Cheng, Wei and Yue, Yisong and Hu, Ziniu},
booktitle = {ICML 2024 Workshops: AutoRL},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/light2024icmlw-strategist/}
}