Learning and Exploiting Progress States in Greedy Best-First Search
Abstract
Previous work introduced the concept of progress states. After expanding a progress state, a greedy best-first search (GBFS) will only expand states with lower heuristic values. Current methods can identify progress states only for a single task and only after a solution for the task has been found. We introduce a novel approach that learns a description logic formula characterizing all progress states in a classical planning domain. Using the learned formulas in a GBFS to break ties in favor of progress states often significantly reduces the search effort.
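The tie-breaking idea can be illustrated with a small sketch. This is not the authors' implementation; it assumes hashable states and a hypothetical `is_progress` predicate standing in for evaluating the learned description logic formula on a state. Among states with equal heuristic value, GBFS then prefers those classified as progress states.

```python
import heapq
import itertools


def gbfs(initial_state, is_goal, successors, h, is_progress):
    """Greedy best-first search that breaks ties among states with equal
    heuristic values in favor of (presumed) progress states.

    `is_progress(state)` is a hypothetical predicate that would evaluate
    the learned description logic formula on a state.
    """
    counter = itertools.count()  # stable insertion order as final tie-breaker
    open_list = []
    # Priority: heuristic value first, then progress flag
    # (False sorts before True, so negate it to prefer progress states).
    heapq.heappush(open_list, (h(initial_state), not is_progress(initial_state),
                               next(counter), initial_state, []))
    closed = set()
    while open_list:
        _, _, _, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return path  # sequence of actions leading to the goal
        for action, successor in successors(state):
            if successor not in closed:
                heapq.heappush(open_list, (h(successor), not is_progress(successor),
                                           next(counter), successor, path + [action]))
    return None  # no solution found
```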
Cite
Text
Ferber et al. "Learning and Exploiting Progress States in Greedy Best-First Search." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/657
Markdown
[Ferber et al. "Learning and Exploiting Progress States in Greedy Best-First Search." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/ferber2022ijcai-learning/) doi:10.24963/IJCAI.2022/657
BibTeX
@inproceedings{ferber2022ijcai-learning,
  title     = {{Learning and Exploiting Progress States in Greedy Best-First Search}},
  author    = {Ferber, Patrick and Cohen, Liat and Seipp, Jendrik and Keller, Thomas},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {4740-4746},
  doi       = {10.24963/IJCAI.2022/657},
  url       = {https://mlanthology.org/ijcai/2022/ferber2022ijcai-learning/}
}