Auxiliary Tasks Speed up Learning Point Goal Navigation

Abstract

PointGoal Navigation is an embodied task that requires agents to navigate to a specified point in an unseen environment. Wijmans et al. showed that this task is solvable in simulation, but their method is computationally prohibitive, requiring 2.5 billion frames of experience and 180 GPU-days. We develop a method that significantly improves sample efficiency in learning PointNav using self-supervised auxiliary tasks (e.g. predicting the action taken between two egocentric observations, predicting the distance between two observations from a trajectory, etc.). We find that naively combining multiple auxiliary tasks improves sample efficiency, but only provides marginal gains beyond a point. To overcome this, we use attention to combine representations from individual auxiliary tasks. Our best agent reaches the 40M-frame performance of the previous state of the art, DD-PPO, 5.5x faster, and improves on DD-PPO’s performance at 40M frames by 0.16 SPL. Our code is publicly available at github.com/joel99/habitat-pointnav-aux.
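To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of (a) an inverse-dynamics style auxiliary loss (cross-entropy on the action taken between two observation embeddings) and (b) attention-weighted fusion of per-task representations. This is an illustrative toy, not the paper's implementation: the embedding size, action space, and all function names here are assumptions, and the real agent uses learned CNN/RNN encoders trained with RL.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_ACTIONS = 4   # assumed discrete action space (e.g. stop, forward, turn-left, turn-right)
EMBED_DIM = 8     # toy embedding size, not the paper's

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def inverse_dynamics_loss(phi_t, phi_t1, action, W, b):
    """Cross-entropy loss for predicting the action taken between two
    consecutive observation embeddings phi_t and phi_t1 (auxiliary task)."""
    logits = W @ np.concatenate([phi_t, phi_t1]) + b
    probs = softmax(logits)
    return -np.log(probs[action])

def attend(query, task_reps):
    """Attention-weighted combination of per-auxiliary-task representations.
    task_reps: (K, D) array of K task embeddings; query: (D,) belief vector."""
    weights = softmax(task_reps @ query)
    return weights @ task_reps  # (D,) fused representation

# Toy example: random vectors standing in for a visual encoder's outputs.
phi_t = rng.standard_normal(EMBED_DIM)
phi_t1 = rng.standard_normal(EMBED_DIM)
W = 0.1 * rng.standard_normal((NUM_ACTIONS, 2 * EMBED_DIM))
b = np.zeros(NUM_ACTIONS)

loss = inverse_dynamics_loss(phi_t, phi_t1, action=1, W=W, b=b)
fused = attend(phi_t, np.stack([phi_t, phi_t1]))
```

In training, such auxiliary losses would be added to the RL objective so the shared encoder receives a denser learning signal than the sparse navigation reward alone.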

Cite

Text

Ye et al. "Auxiliary Tasks Speed up Learning Point Goal Navigation." Conference on Robot Learning, 2020.

Markdown

[Ye et al. "Auxiliary Tasks Speed up Learning Point Goal Navigation." Conference on Robot Learning, 2020.](https://mlanthology.org/corl/2020/ye2020corl-auxiliary/)

BibTeX

@inproceedings{ye2020corl-auxiliary,
  title     = {{Auxiliary Tasks Speed up Learning Point Goal Navigation}},
  author    = {Ye, Joel and Batra, Dhruv and Wijmans, Erik and Das, Abhishek},
  booktitle = {Conference on Robot Learning},
  year      = {2020},
  pages     = {498--516},
  volume    = {155},
  url       = {https://mlanthology.org/corl/2020/ye2020corl-auxiliary/}
}