ASNets: Deep Learning for Generalised Planning
Abstract
In this paper, we discuss the learning of generalised policies for probabilistic and classical planning problems using Action Schema Networks (ASNets). The ASNet is a neural network architecture that exploits the relational structure of (P)PDDL planning problems to learn a common set of weights that can be applied to any problem in a domain. By mimicking the actions chosen by a traditional, non-learning planner on a handful of small problems in a domain, ASNets are able to learn a generalised reactive policy that can quickly solve much larger instances from the domain. This work extends the ASNet architecture to make it more expressive, while still remaining invariant to a range of symmetries that exist in PPDDL problems. We also present a thorough experimental evaluation of ASNets, including a comparison with heuristic search planners on seven probabilistic and deterministic domains, an extended evaluation on over 18,000 Blocksworld instances, and an ablation study. Finally, we show that sparsity-inducing regularisation can produce ASNets that are compact enough for humans to understand, yielding insights into how the structure of ASNets allows them to generalise across a domain.
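The abstract's key idea is that a single set of weights, tied to each action schema rather than to each grounded action, can be reused across every problem in a domain. The following is a minimal, hypothetical sketch of that weight-sharing principle (not the actual ASNet implementation): one shared weight matrix for an invented "pick-up" schema is applied to every grounding of that schema, so the same parameters work for problems of any size.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM, HIDDEN_DIM = 4, 3
# One shared weight set for a hypothetical "pick-up" action schema.
W = rng.standard_normal((FEATURE_DIM, HIDDEN_DIM))
b = rng.standard_normal(HIDDEN_DIM)

def action_module(features):
    """Apply the schema's shared weights to one grounded action's features."""
    return np.maximum(0.0, features @ W + b)  # ReLU-style activation

# A small problem (2 groundings of the schema) and a larger one (10 groundings)
# are processed with the *same* parameters W and b -- this is what lets a
# policy trained on small instances transfer to much larger ones.
small_problem = rng.standard_normal((2, FEATURE_DIM))
large_problem = rng.standard_normal((10, FEATURE_DIM))

small_out = np.vstack([action_module(f) for f in small_problem])
large_out = np.vstack([action_module(f) for f in large_problem])

print(small_out.shape, large_out.shape)  # (2, 3) (10, 3)
```

In the real architecture, these per-action modules are interleaved with proposition modules over several layers, but the parameter count stays independent of problem size for the same reason shown here.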
Cite
Text
Toyer et al. "ASNets: Deep Learning for Generalised Planning." Journal of Artificial Intelligence Research, 2020. doi:10.1613/JAIR.1.11633
Markdown
[Toyer et al. "ASNets: Deep Learning for Generalised Planning." Journal of Artificial Intelligence Research, 2020.](https://mlanthology.org/jair/2020/toyer2020jair-asnets/) doi:10.1613/JAIR.1.11633
BibTeX
@article{toyer2020jair-asnets,
  title   = {{ASNets: Deep Learning for Generalised Planning}},
  author  = {Toyer, Sam and Thiébaux, Sylvie and Trevizan, Felipe W. and Xie, Lexing},
  journal = {Journal of Artificial Intelligence Research},
  year    = {2020},
  volume  = {68},
  pages   = {1--68},
  doi     = {10.1613/JAIR.1.11633},
  url     = {https://mlanthology.org/jair/2020/toyer2020jair-asnets/}
}