Rapid Exploration for Open-World Navigation with Latent Goal Models
Abstract
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments. At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images. We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration. Trained on a large offline dataset of prior experience, the model acquires a representation of visual goals that is robust to task-irrelevant distractors. We demonstrate our method on a mobile ground robot in open-world exploration scenarios. Given an image of a goal that is up to 80 meters away, our method leverages its representation to explore and discover the goal in under 20 minutes, even amidst previously-unseen obstacles and weather conditions.
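The abstract describes a latent variable model of distances and actions regularized with an information bottleneck, where the goal prior can also be sampled to propose exploration targets. Below is a minimal, hypothetical sketch of that kind of architecture in PyTorch; it is not the authors' released implementation, and names such as LatentGoalModel, obs_feat, and goal_feat are illustrative assumptions (the features are presumed to come from a separate image encoder).

```python
import torch
import torch.nn as nn

class LatentGoalModel(nn.Module):
    """Sketch: encode (observation, goal) features into a latent goal z via a
    variational information bottleneck, then predict distance and action."""

    def __init__(self, obs_dim=512, latent_dim=32, action_dim=2):
        super().__init__()
        # Encoder q(z | observation, goal) outputs mean and log-variance of z.
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Heads condition on the current observation features and the latent goal.
        self.distance_head = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )
        self.action_head = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, obs_feat, goal_feat):
        mu, log_var = self.encoder(
            torch.cat([obs_feat, goal_feat], dim=-1)
        ).chunk(2, dim=-1)
        # Reparameterization trick: sample z from q(z | obs, goal).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        # KL divergence to a unit Gaussian prior acts as the information
        # bottleneck; sampling that prior yields candidate exploration goals.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
        inp = torch.cat([obs_feat, z], dim=-1)
        return self.distance_head(inp), self.action_head(inp), kl
```

In this sketch, the distance and action losses would be computed against labels from the offline dataset and combined with a weighted KL term, which is what yields the compact, distractor-robust goal representation the abstract refers to.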
Cite
Text
Shah et al. "Rapid Exploration for Open-World Navigation with Latent Goal Models." Conference on Robot Learning, 2021.
Markdown
[Shah et al. "Rapid Exploration for Open-World Navigation with Latent Goal Models." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/shah2021corl-rapid/)
BibTeX
@inproceedings{shah2021corl-rapid,
  title     = {{Rapid Exploration for Open-World Navigation with Latent Goal Models}},
  author    = {Shah, Dhruv and Eysenbach, Benjamin and Rhinehart, Nicholas and Levine, Sergey},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {674-684},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/shah2021corl-rapid/}
}