Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning
Abstract
Sampling-based motion planning algorithms excel at searching global solution paths in geometrically complex settings. However, classical approaches, such as RRT, are difficult to scale beyond low-dimensional search spaces and rely on privileged knowledge, e.g., about collision detection and underlying state distances. In this work, we take a step towards the integration of sampling-based planning into the reinforcement learning framework to solve sparse-reward control tasks from high-dimensional inputs. Our method, called VELAP, determines sequences of waypoints through sampling-based exploration in a learned state embedding. Unlike other sampling-based techniques, we iteratively expand a tree-based memory of visited latent areas, which is leveraged to explore a larger portion of the latent space for a given number of search iterations. We demonstrate state-of-the-art results in learning control from offline data in the context of vision-based manipulation under sparse reward feedback. Our method extends the set of available planning tools in model-based reinforcement learning by adding a latent planner that searches globally for feasible paths instead of being bound to a fixed prediction horizon.
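The expansion strategy described in the abstract resembles expansive-space-tree search carried out in a learned embedding. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' VELAP implementation: `sample_action`, `latent_dynamics`, and `is_goal` are assumed placeholder callables standing in for learned components, and the density-based node selection is one common way to realize the "expansive" bias.

```python
# Minimal, hypothetical sketch of expansive tree search in a learned latent space
# (not the authors' VELAP implementation). sample_action, latent_dynamics, and
# is_goal are assumed placeholders for learned components.
import numpy as np

class LatentNode:
    def __init__(self, z, parent=None):
        self.z = z            # latent state, e.g. the encoding of an image observation
        self.parent = parent  # back-pointer used to recover the waypoint sequence

def expansive_latent_plan(z_start, sample_action, latent_dynamics, is_goal,
                          n_iters=500, radius=1.0, seed=0):
    """Grow a tree of visited latent states, biasing expansion toward sparsely
    visited regions, and return a sequence of latent waypoints if a goal is found."""
    rng = np.random.default_rng(seed)
    nodes = [LatentNode(np.asarray(z_start, dtype=float))]
    for _ in range(n_iters):
        # Expansive-space-tree style selection: weight each node inversely by the
        # number of previously visited nodes within a fixed latent radius.
        Z = np.stack([n.z for n in nodes])
        pairwise = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        density = (pairwise < radius).sum(axis=1)          # >= 1, since each node counts itself
        weights = 1.0 / density
        node = nodes[rng.choice(len(nodes), p=weights / weights.sum())]

        # Expand the selected node by propagating a sampled action through the
        # learned latent dynamics model.
        z_new = latent_dynamics(node.z, sample_action(rng))
        child = LatentNode(np.asarray(z_new, dtype=float), parent=node)
        nodes.append(child)

        if is_goal(child.z):
            # Walk back to the root to obtain the sequence of latent waypoints.
            path = []
            while child is not None:
                path.append(child.z)
                child = child.parent
            return path[::-1]
    return None  # no feasible latent path found within the search budget
```

Weighting node selection by inverse local density is what spreads the tree across the latent space, rather than repeatedly refining regions that have already been visited.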
Cite

Text
Gieselmann and Pokorny. "Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning." Conference on Robot Learning, 2023.

Markdown
[Gieselmann and Pokorny. "Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/gieselmann2023corl-expansive-a/)

BibTeX
@inproceedings{gieselmann2023corl-expansive-a,
  title = {{Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning}},
  author = {Gieselmann, Robert and Pokorny, Florian T.},
  booktitle = {Conference on Robot Learning},
  year = {2023},
  pages = {1-22},
  volume = {229},
  url = {https://mlanthology.org/corl/2023/gieselmann2023corl-expansive-a/}
}