Minimax Tree of Thoughts: Playing Two-Player Zero-Sum Sequential Games with Large Language Models
Abstract
Large language models are being applied to an increasing range of tasks, yet existing LLM-based methods still perform poorly in two-player zero-sum sequential games. To address the challenges these games pose for large language models, we propose Minimax Tree of Thoughts, which combines the Tree of Thoughts framework with minimax search. Experimental results show that our Minimax Tree of Thoughts method significantly outperforms the original Tree of Thoughts method on two-player zero-sum sequential game tasks such as word chain and the game of Ghost.
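The abstract describes combining Tree of Thoughts with minimax search: the LLM proposes candidate moves ("thoughts") and evaluates states, while minimax alternates maximization and minimization over the resulting game tree. A minimal sketch of this idea is below; the function names `propose` and `evaluate` are hypothetical stand-ins for LLM calls, not the paper's actual interface.

```python
from typing import Callable, List, Optional, Tuple

def minimax_tot(
    state: str,
    depth: int,
    maximizing: bool,
    propose: Callable[[str], List[str]],   # stand-in for an LLM proposing candidate moves
    evaluate: Callable[[str], float],      # stand-in for an LLM scoring a state for the max player
) -> Tuple[float, Optional[str]]:
    """Minimax over LLM-proposed thoughts; returns (value, best_move)."""
    candidates = propose(state)
    # Leaf: depth exhausted or no legal moves proposed.
    if depth == 0 or not candidates:
        return evaluate(state), None
    best_move: Optional[str] = None
    if maximizing:
        best = float("-inf")
        for move in candidates:
            value, _ = minimax_tot(move, depth - 1, False, propose, evaluate)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in candidates:
            value, _ = minimax_tot(move, depth - 1, True, propose, evaluate)
            if value < best:
                best, best_move = value, move
    return best, best_move
```

With toy `propose`/`evaluate` functions backed by a small hand-built game tree, the root player picks the branch whose worst-case (opponent-minimized) outcome is best, which is the guarantee that plain Tree of Thoughts search, lacking an adversarial step, does not provide.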
Cite
Text
Guo et al. "Minimax Tree of Thoughts: Playing Two-Player Zero-Sum Sequential Games with Large Language Models." ICML 2024 Workshops: LLMs_and_Cognition, 2024.
Markdown
[Guo et al. "Minimax Tree of Thoughts: Playing Two-Player Zero-Sum Sequential Games with Large Language Models." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/guo2024icmlw-minimax/)
BibTeX
@inproceedings{guo2024icmlw-minimax,
title = {{Minimax Tree of Thoughts: Playing Two-Player Zero-Sum Sequential Games with Large Language Models}},
author = {Guo, Wei and Hao, Xiaotian and Hao, Jianye and Zheng, Yan},
booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/guo2024icmlw-minimax/}
}