Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
Abstract
While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision-making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks. This is accomplished by training an encoder-decoder Transformer model to predict the _search dynamics_ of the $A^*$ search algorithm. We fine-tune this model to obtain a _Searchformer_, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7\% of the time, while using up to 26.8\% fewer search steps than the $A^*$ implementation that was used for training initially. In our training method, $A^*$'s search dynamics are expressed as a token sequence outlining when task states are added to and removed from the search tree during symbolic planning. Searchformer significantly outperforms baselines that predict the optimal plan directly, while using a 5--10$\times$ smaller model size and a 10$\times$ smaller training dataset. Lastly, we demonstrate how Searchformer scales to larger and more complex decision-making tasks with an improved percentage of solved tasks and shortened search dynamics.
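To make the trace representation concrete, below is a minimal Python sketch of how an $A^*$ run on a toy grid task could be logged as a sequence of "create"/"close" tokens. This is an illustration under assumed conventions, not the authors' implementation: the token vocabulary, coordinate format, and grid task are chosen for this example only.

# Illustrative sketch (assumption: not the paper's exact token vocabulary or
# trace format). It runs A* on a small grid and emits tokens every time a node
# is added to ("create") or removed from ("close") the search frontier,
# mimicking the kind of search-dynamics sequence described in the abstract.
import heapq

def astar_with_trace(grid, start, goal):
    """Return (plan, trace); trace is a flat list of search-dynamics tokens."""
    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    trace = []
    closed = set()
    frontier = [(h(start), 0, start, [start])]          # entries: (f, g, node, path)
    trace += ["create", str(start), "c0", f"h{h(start)}"]

    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        trace += ["close", str(node), f"c{g}", f"h{h(node)}"]
        if node == goal:
            return path, trace                          # optimal plan found
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in closed):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
                trace += ["create", str(nxt), f"c{g + 1}", f"h{h(nxt)}"]
    return None, trace

# A training example would pair the task description with the trace followed
# by the optimal plan as the target token sequence.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]                                      # 0 = free cell, 1 = wall
plan, trace = astar_with_trace(grid, (0, 0), (2, 2))
print(plan)
print(" ".join(trace))

In this framing, the fine-tuning step described in the abstract would replace the $A^*$ trace targets with the model's own shorter, still-optimal traces, which is what shortens the search dynamics over time.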
Cite
Text
Lehnert et al. "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping." ICLR 2024 Workshops: LLMAgents, 2024.
Markdown
[Lehnert et al. "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping." ICLR 2024 Workshops: LLMAgents, 2024.](https://mlanthology.org/iclrw/2024/lehnert2024iclrw-beyond/)
BibTeX
@inproceedings{lehnert2024iclrw-beyond,
  title     = {{Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping}},
  author    = {Lehnert, Lucas and Sukhbaatar, Sainbayar and McVay, Paul and Rabbat, Michael and Tian, Yuandong},
  booktitle = {ICLR 2024 Workshops: LLMAgents},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/lehnert2024iclrw-beyond/}
}