Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation

Abstract

Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects. Project website: https://f3rm.csail.mit.edu
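The core mechanism the abstract describes, querying a 3D feature field with a free-text CLIP embedding to localize an object before grasping, can be sketched in a few lines. The sketch below is illustrative rather than the authors' implementation: feature_field is a hypothetical stand-in for the distilled field (the real field is trained by distilling multi-view 2D CLIP features into a NeRF-style network), while the CLIP calls follow the openai/CLIP package.

import torch
import clip  # openai/CLIP package: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-L/14", device=device)

# Hypothetical stand-in for the distilled feature field: a small MLP mapping
# 3D world coordinates to CLIP-dimensional (768-d) features. In F3RM this
# role is played by a trained NeRF-style field, not a randomly initialized net.
feature_field = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(), torch.nn.Linear(256, 768)
).to(device)

@torch.no_grad()
def text_embedding(query: str) -> torch.Tensor:
    """Embed a free-text query into CLIP space, L2-normalized. Shape (1, 768)."""
    tokens = clip.tokenize([query]).to(device)
    emb = model.encode_text(tokens).float()
    return emb / emb.norm(dim=-1, keepdim=True)

@torch.no_grad()
def similarity_volume(points: torch.Tensor, query: str) -> torch.Tensor:
    """Cosine similarity between each point's distilled feature and the query.

    points: (N, 3) world coordinates. Returns (N,) similarities, which can be
    thresholded to localize the object named by the query before proposing
    6-DOF grasp poses around the high-similarity region.
    """
    feats = feature_field(points)                        # (N, 768)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats @ text_embedding(query).T).squeeze(-1)

# Usage: score sampled workspace points against a free-text object description.
sims = similarity_volume(torch.rand(4096, 3, device=device), "a red mug")

Because both the field outputs and the text embedding live in the same CLIP space, the same scoring works for unseen expressions and object categories without retraining, which is the generalization property the abstract claims.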

Cite

Text

Shen et al. "Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation." Conference on Robot Learning, 2023.

Markdown

[Shen et al. "Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/shen2023corl-distilled/)

BibTeX

@inproceedings{shen2023corl-distilled,
  title     = {{Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation}},
  author    = {Shen, William and Yang, Ge and Yu, Alan and Wong, Jansen and Kaelbling, Leslie Pack and Isola, Phillip},
  booktitle = {Conference on Robot Learning},
  year      = {2023},
  pages     = {405--424},
  volume    = {229},
  url       = {https://mlanthology.org/corl/2023/shen2023corl-distilled/}
}