Backward Learning for Goal-Conditioned Policies
Abstract
Can we learn policies in reinforcement learning without rewards? Can we learn a policy just by trying to reach a goal state? We answer these questions positively by proposing a multi-step procedure that first learns a world model that runs backward in time, then generates goal-reaching backward trajectories, improves those sequences using shortest-path algorithms, and finally trains a neural network policy by imitation learning. We evaluate our method on a deterministic maze environment where the observations are $64\times 64$ pixel bird's-eye images, and show that it consistently reaches several goals.
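To illustrate the shortest-path improvement step on pooled goal-reaching trajectories, here is a minimal Python sketch. It assumes hashable states and discrete actions rather than the paper's pixel observations, and uses BFS over a transition graph with a tabular policy as the imitation target; these are simplifying assumptions for a deterministic environment, not the paper's implementation.

```python
from collections import defaultdict, deque

def shorten_trajectories(trajectories, goal):
    """Pool (state, action, next_state) transitions from goal-reaching
    trajectories into a graph, then recover, for every state, the first
    action of a shortest path to the goal via backward BFS."""
    # Merge all observed transitions into a reverse adjacency list:
    # next_state -> [(state, action), ...]
    reverse_edges = defaultdict(list)
    for traj in trajectories:
        for state, action, next_state in traj:
            reverse_edges[next_state].append((state, action))

    # BFS backward from the goal: the first time a state is reached,
    # the recorded action starts a shortest path toward the goal.
    policy = {}            # state -> best action (imitation targets)
    frontier = deque([goal])
    seen = {goal}
    while frontier:
        nxt = frontier.popleft()
        for state, action in reverse_edges[nxt]:
            if state not in seen:
                seen.add(state)
                policy[state] = action
                frontier.append(state)
    return policy

# Toy example: states are cells on a line, goal is state 3.
# One direct trajectory and one detour through state 4.
trajectories = [
    [(0, "right", 1), (1, "right", 2), (2, "right", 3)],
    [(0, "up", 4), (4, "down", 0)],
]
policy = shorten_trajectories(trajectories, goal=3)
print(policy)  # {2: 'right', 1: 'right', 0: 'right', 4: 'down'}
```

The resulting state-to-action table could then serve as the dataset for the final imitation-learning step, with each entry acting as a supervised training pair.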
Cite
Höftmann et al. "Backward Learning for Goal-Conditioned Policies." NeurIPS 2023 Workshops: GCRL, 2023. https://mlanthology.org/neuripsw/2023/hoftmann2023neuripsw-backward/
BibTeX:
@inproceedings{hoftmann2023neuripsw-backward,
title = {{Backward Learning for Goal-Conditioned Policies}},
author = {Höftmann, Marc and Robine, Jan and Harmeling, Stefan},
booktitle = {NeurIPS 2023 Workshops: GCRL},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/hoftmann2023neuripsw-backward/}
}