How to Reduce Action Space for Planning Domains? (Student Abstract)
Abstract
While AI planning and Reinforcement Learning (RL) both solve sequential decision-making problems, they are based on different formalisms, which leads to a significant difference in their action spaces. When solving planning problems using RL algorithms, we have observed that a naive translation of the planning action space incurs a severe degradation in sample complexity. In practice, those action spaces are often engineered manually in a domain-specific manner. In this abstract, we present a method that reduces the parameters of operators in AI planning domains by introducing a parameter seed set problem and casting it as a classical planning task. Our experiments show that the proposed method significantly reduces the number of actions in RL environments originating from AI planning domains.
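To see why reducing operator parameters matters for the action-space size, consider that a lifted operator grounds into one action per combination of objects bound to its parameters, so removing a parameter divides the grounded action count by the size of that parameter's type. The sketch below is only a toy illustration of this combinatorial effect, not the authors' method or code; the operator and object names are hypothetical.

```python
from itertools import product

def grounded_actions(operator_name, parameters, objects_by_type):
    """Enumerate all groundings of a lifted operator over typed objects."""
    domains = [objects_by_type[t] for t in parameters]
    return [(operator_name, binding) for binding in product(*domains)]

# Toy logistics-style domain: one operator with three typed parameters.
objects_by_type = {
    "truck": ["t1", "t2"],
    "package": ["p1", "p2", "p3"],
    "location": ["l1", "l2", "l3", "l4"],
}

# All parameters kept: |truck| * |package| * |location| groundings.
full = grounded_actions("load", ["truck", "package", "location"], objects_by_type)

# If one parameter (here, the location) can be recovered from the state,
# dropping it from the operator cuts the grounding to |truck| * |package|.
reduced = grounded_actions("load", ["truck", "package"], objects_by_type)

print(len(full), "grounded actions with all parameters")        # 24
print(len(reduced), "grounded actions after parameter reduction")  # 6
```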
Cite
Text
Kokel et al. "How to Reduce Action Space for Planning Domains? (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21631

Markdown
[Kokel et al. "How to Reduce Action Space for Planning Domains? (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/kokel2022aaai-reduce/) doi:10.1609/AAAI.V36I11.21631

BibTeX
@inproceedings{kokel2022aaai-reduce,
title = {{How to Reduce Action Space for Planning Domains? (Student Abstract)}},
author = {Kokel, Harsha and Lee, Junkyu and Katz, Michael and Sohrabi, Shirin and Srinivas, Kavitha},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {12989--12990},
doi = {10.1609/AAAI.V36I11.21631},
url = {https://mlanthology.org/aaai/2022/kokel2022aaai-reduce/}
}