Deep RePReL--Combining Planning and Deep RL for Acting in Relational Domains

Abstract

We consider the problem of combining symbolic planning and deep reinforcement learning (RL) to achieve the best of both worlds: the generalization ability of the planner and the effective learning ability of deep RL. To this end, we extend the prior work of Kokel et al. (2021), RePReL, to deep RL. As we demonstrate in experiments on two relational worlds, this combined framework enables effective learning, transfer, and generalization compared to an end-to-end deep RL framework.
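As a rough illustration of the combined framework described in the abstract (not the authors' actual algorithm), the sketch below shows a hard-coded stand-in for a symbolic planner decomposing a goal into subtasks, with a separate small Q-network per subtask acting on a subtask-specific state abstraction. The class names (Planner, SubtaskPolicy, ToyEnv), the toy environment, and the hard-coded plan are all illustrative assumptions.

import random
import torch
import torch.nn as nn


class Planner:
    """Stand-in for a symbolic planner: returns a fixed subtask sequence."""
    def plan(self, goal):
        # In a RePReL-style system the plan would come from a symbolic planner;
        # here it is hard-coded purely for illustration.
        return ["pick", "move", "drop"]


class SubtaskPolicy(nn.Module):
    """A small Q-network trained only on the abstract state of one subtask."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions)
        )

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy action selection over the subtask's abstract state.
        if random.random() < epsilon:
            return random.randrange(self.net[-1].out_features)
        with torch.no_grad():
            return int(self.net(state).argmax())


class ToyEnv:
    """Trivial placeholder environment: each subtask ends after a few steps."""
    def __init__(self, state_dim=4):
        self.state_dim = state_dim

    def abstract_state(self, subtask):
        self._steps_left = random.randint(1, 3)
        return torch.rand(self.state_dim)

    def step(self, subtask, action):
        self._steps_left -= 1
        done = self._steps_left <= 0
        return torch.rand(self.state_dim), (1.0 if done else 0.0), done


def run_episode(env, planner, policies, goal):
    """Interleave high-level planning with learned low-level control."""
    total_reward = 0.0
    for subtask in planner.plan(goal):
        state = env.abstract_state(subtask)   # subtask-specific abstraction
        done = False
        while not done:
            action = policies[subtask].act(state)
            state, reward, done = env.step(subtask, action)
            total_reward += reward            # a Q-learning update would go here
    return total_reward


if __name__ == "__main__":
    env = ToyEnv()
    planner = Planner()
    policies = {t: SubtaskPolicy(env.state_dim, n_actions=4)
                for t in planner.plan(goal="deliver")}
    print(run_episode(env, planner, policies, goal="deliver"))

The point of the sketch is only the structure: the planner handles long-horizon task decomposition, while each deep RL policy learns a short-horizon skill, which is the division of labor the abstract attributes to the combined framework.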

Cite

Text

Kokel et al. "Deep RePReL--Combining Planning and Deep RL for Acting in Relational Domains." NeurIPS 2021 Workshops: DeepRL, 2021.

Markdown

[Kokel et al. "Deep RePReL--Combining Planning and Deep RL for Acting in Relational Domains." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/kokel2021neuripsw-deep/)

BibTeX

@inproceedings{kokel2021neuripsw-deep,
  title     = {{Deep RePReL--Combining Planning and Deep RL for Acting in Relational Domains}},
  author    = {Kokel, Harsha and Manoharan, Arjun and Natarajan, Sriraam and Ravindran, Balaraman and Tadepalli, Prasad},
  booktitle = {NeurIPS 2021 Workshops: DeepRL},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/kokel2021neuripsw-deep/}
}