Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action

Abstract

Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings. However, particularly in vision-based settings where specifying goals requires an image, this makes for an unnatural interface. Language provides a more convenient modality for communication with robots, but contemporary methods typically require expensive supervision in the form of trajectories annotated with language descriptions. We develop a system, LM-Nav, for robotic navigation that enjoys the benefits of training on large, unannotated datasets of trajectories, while still providing a high-level interface to the user. Instead of utilizing a labeled instruction-following dataset, we show that such a system can be constructed entirely out of pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without requiring any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex, outdoor environments from natural language instructions.
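The abstract outlines a three-stage pipeline: a language model parses the instruction into an ordered list of landmarks, CLIP grounds each landmark against node images of a topological graph built with the navigation model, and a graph search selects a path that visits likely landmark nodes in order while keeping travel cost low. The sketch below is an illustrative reconstruction of that search step only, not the authors' released code; the toy graph, the landmark scores (standing in for CLIP log-probabilities), and all names such as plan_over_graph are assumptions made for this example.

# Illustrative sketch (assumed names, not the LM-Nav implementation):
# choose one graph node per instruction landmark, in order, trading off
# CLIP grounding scores against traversal cost on a topological graph.
import heapq
import itertools
from typing import Dict, List, Tuple

Graph = Dict[int, List[Tuple[int, float]]]  # node -> [(neighbor, edge_cost)]


def shortest_paths(graph: Graph, source: int) -> Dict[int, float]:
    """Dijkstra over the topological graph (edge costs would come from the navigation model)."""
    dist = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(frontier, (nd, v))
    return dist


def plan_over_graph(
    graph: Graph,
    start: int,
    landmark_scores: List[Dict[int, float]],  # per landmark: node -> log p(landmark | node image)
    alpha: float = 1.0,
) -> List[int]:
    """Pick one node per landmark (in instruction order) maximizing
    grounding score minus alpha * travel cost. Brute force over assignments,
    which is only reasonable for the tiny graph in this sketch."""
    nodes = list(graph.keys())
    dists = {u: shortest_paths(graph, u) for u in nodes + [start]}
    best, best_plan = float("-inf"), []
    for assignment in itertools.product(nodes, repeat=len(landmark_scores)):
        score, prev = 0.0, start
        for node, scores in zip(assignment, landmark_scores):
            travel = dists[prev].get(node, float("inf"))
            score += scores.get(node, float("-inf")) - alpha * travel
            prev = node
        if score > best:
            best, best_plan = score, list(assignment)
    return best_plan


if __name__ == "__main__":
    # Toy topological graph with 4 nodes; the scores stand in for CLIP outputs.
    graph = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0), (3, 2.0)],
             2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0), (1, 2.0)]}
    # Instruction "go to the stop sign, then the blue building" -> two landmarks.
    landmark_scores = [
        {0: -3.0, 1: -0.2, 2: -2.5, 3: -4.0},  # "stop sign"
        {0: -4.0, 1: -3.0, 2: -2.0, 3: -0.1},  # "blue building"
    ]
    print(plan_over_graph(graph, start=0, landmark_scores=landmark_scores))  # -> [1, 3]

The trade-off parameter alpha in this sketch balances grounding confidence against traversal cost; a practical implementation would replace the brute-force assignment with a search over (node, landmark-index) states rather than enumerating all combinations.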

Cite

Text

Shah. "Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action." NeurIPS 2022 Workshops: FMDM, 2022.

Markdown

[Shah. "Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action." NeurIPS 2022 Workshops: FMDM, 2022.](https://mlanthology.org/neuripsw/2022/shah2022neuripsw-robotic/)

BibTeX

@inproceedings{shah2022neuripsw-robotic,
  title     = {{Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action}},
  author    = {Shah, Dhruv},
  booktitle = {NeurIPS 2022 Workshops: FMDM},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/shah2022neuripsw-robotic/}
}