Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation

Abstract

How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information? To tackle this challenge, this paper introduces a two-level hierarchical approach that integrates model-free deep learning and model-based path planning. At the low level, a neural-network motion controller, called the intention-net, is trained end-to-end to provide robust local navigation. The intention-net maps images from a single monocular camera and "intentions" directly to robot controls. At the high level, a path planner uses a crude map, e.g., a 2-D floor plan, to compute a path from the robot's current location to the goal. The planned path provides intentions to the intention-net. Preliminary experiments suggest that the learned motion controller is robust against perceptual uncertainty and that, by integrating with a path planner, it generalizes effectively to new environments and goals.
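
The two-level design described in the abstract can be summarized in a short sketch. The Python/PyTorch snippet below is illustrative only: the module names, the small convolutional backbone, the discrete intention encoding (e.g., turn-left / turn-right / go-straight / stop), and the planner interface are assumptions made for exposition, not the authors' implementation.

# Illustrative sketch of the two-level architecture; names and details are assumptions.
import torch
import torch.nn as nn

class IntentionNet(nn.Module):
    """Low level: map a monocular camera image plus an 'intention' derived
    from the planned path to robot controls (linear and angular velocity)."""

    def __init__(self, num_intentions: int = 4):
        super().__init__()
        # Small convolutional encoder for the camera image (a stand-in for a larger backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Embed the discrete intention so it can be fused with image features.
        self.intention_embed = nn.Embedding(num_intentions, 16)
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # (linear velocity, angular velocity)
        )

    def forward(self, image: torch.Tensor, intention: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image)
        fused = torch.cat([features, self.intention_embed(intention)], dim=-1)
        return self.head(fused)

def navigation_step(planner, controller, robot_pose, goal, image):
    """High level: replan on the crude 2-D floor plan and extract the local intention;
    low level: the intention-net turns image + intention into controls.
    `planner.plan` and `planner.intention_from` are hypothetical interfaces."""
    path = planner.plan(robot_pose, goal)
    intention = planner.intention_from(path, robot_pose)  # e.g., next turn along the path
    return controller(image, intention)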

Cite

Text

Gao et al. "Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation." Conference on Robot Learning, 2017.

Markdown

[Gao et al. "Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation." Conference on Robot Learning, 2017.](https://mlanthology.org/corl/2017/gao2017corl-intention/)

BibTeX

@inproceedings{gao2017corl-intention,
  title     = {{Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation}},
  author    = {Gao, Wei and Hsu, David and Lee, Wee Sun and Shen, Shengmei and Subramanian, Karthikk},
  booktitle = {Conference on Robot Learning},
  year      = {2017},
  pages     = {185--194},
  url       = {https://mlanthology.org/corl/2017/gao2017corl-intention/}
}