Optimize Planning Heuristics to Rank, Not to Estimate Cost-to-Goal

Abstract

In imitation learning for planning, the parameters of heuristic functions are optimized against a set of solved problem instances. This work revisits the necessary and sufficient conditions under which a heuristic is strictly optimally efficient for forward search algorithms, mainly A* and greedy best-first search, i.e., the search expands only states on the returned optimal path. It then proposes a family of ranking-based loss functions tailored to a given variant of the forward search algorithm. Furthermore, from a learning-theory point of view, it discusses why learning to estimate the cost-to-goal h* is unnecessarily difficult. The experimental comparison on a diverse set of problems unequivocally supports the derived theory.
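To make the ranking idea concrete, below is a minimal sketch of a pairwise ranking (hinge) loss of the kind the abstract describes. All names and values are hypothetical, not the paper's implementation: for A*, a state on the optimal path should have an f-value (f = g + h) no larger than that of any state off the path, so each violated pair is penalized by its margin of violation. For greedy best-first search, the same construction would use h alone in place of f.

```python
# Hedged sketch of a pairwise ranking hinge loss for A*-style search.
# Names (ranking_loss, f_on_path, f_off_path) are illustrative, not from the paper.

def ranking_loss(f_on_path, f_off_path, margin=1.0):
    """Sum of hinge penalties over all (on-path, off-path) state pairs.

    f_on_path / f_off_path: iterables of f-values (floats) for states on
    and off the returned optimal path. The loss is zero exactly when every
    on-path f-value sits at least `margin` below every off-path f-value,
    i.e., the search would expand only on-path states first.
    """
    return sum(
        max(0.0, f_on - f_off + margin)
        for f_on in f_on_path
        for f_off in f_off_path
    )

# Correct ranking: on-path states are well below off-path states.
print(ranking_loss([1.0, 2.0], [4.0, 5.0]))  # 0.0
# Violated ranking: an off-path state would be expanded first.
print(ranking_loss([3.0], [1.0]))  # 3.0
```

Note that such a loss only constrains the relative order of heuristic values, not their absolute magnitude, which is exactly why it can be easier to optimize than a regression onto h*.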

Cite

Text

Chrestien et al. "Optimize Planning Heuristics to Rank, Not to Estimate Cost-to-Goal." Neural Information Processing Systems, 2023.

Markdown

[Chrestien et al. "Optimize Planning Heuristics to Rank, Not to Estimate Cost-to-Goal." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/chrestien2023neurips-optimize/)

BibTeX

@inproceedings{chrestien2023neurips-optimize,
  title     = {{Optimize Planning Heuristics to Rank, Not to Estimate Cost-to-Goal}},
  author    = {Chrestien, Leah and Edelkamp, Stefan and Komenda, Antonin and Pevny, Tomas},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/chrestien2023neurips-optimize/}
}