Learning Exploration Policies for Navigation
Abstract
Numerous past works have tackled the problem of task-driven navigation, but how to effectively explore a new environment to enable a variety of downstream tasks has received much less attention. In this work, we study how agents can autonomously explore realistic and complex 3D environments without the context of task rewards. We propose a learning-based approach and investigate different policy architectures, reward functions, and training paradigms. We find that the use of policies with spatial memory that are bootstrapped with imitation learning and finally fine-tuned with coverage rewards derived purely from on-board sensors can be effective at exploring novel environments. We show that our learned exploration policies can explore better than classical approaches based on geometry alone and generic learning-based exploration techniques. Finally, we also show how such task-agnostic exploration can be used for downstream tasks. Videos are available at https://sites.google.com/view/exploration-for-nav/.
Cite
Text
Chen et al. "Learning Exploration Policies for Navigation." International Conference on Learning Representations, 2019.
Markdown
[Chen et al. "Learning Exploration Policies for Navigation." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/chen2019iclr-learning/)
BibTeX
@inproceedings{chen2019iclr-learning,
  title = {{Learning Exploration Policies for Navigation}},
  author = {Chen, Tao and Gupta, Saurabh and Gupta, Abhinav},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/chen2019iclr-learning/}
}