LERF: Language Embedded Radiance Fields

Abstract

Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances. In this work we propose Language Embedded Radiance Fields (LERF), a method for grounding language embeddings from off-the-shelf models like CLIP into NeRF, which enables these types of open-ended language queries in 3D. LERF learns a dense, multi-scale language field inside NeRF by volume rendering CLIP embeddings along training rays, supervising these embeddings across training views to provide multi-view consistency and smooth the underlying language field. After optimization, LERF can extract 3D relevancy maps for a broad range of language prompts interactively in real time, which has potential use cases in robotics, understanding vision-language models, and interacting with 3D scenes. LERF enables pixel-aligned, zero-shot queries on the distilled 3D CLIP embeddings without relying on region proposals or masks, supporting long-tail open-vocabulary queries hierarchically across the volume. See the project website at https://lerf.io
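
To make the two core operations in the abstract concrete, here is a minimal NumPy sketch of (1) volume rendering per-sample language embeddings along a ray with standard NeRF weights, and (2) scoring the rendered embedding against a text query using a pairwise softmax against negative phrases, in the spirit of LERF's relevancy score. All function names, array shapes, and the random stand-in embeddings are illustrative assumptions, not the authors' implementation.

import numpy as np

def render_language_embedding(sigmas, deltas, embeds):
    # Volume-render per-sample CLIP embeddings along one ray (illustrative sketch).
    # sigmas: (N,) volume densities at the ray samples
    # deltas: (N,) spacing between adjacent samples
    # embeds: (N, D) unit-norm CLIP embeddings predicted at each sample
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # accumulated transmittance
    weights = trans * alphas                                        # standard NeRF ray weights
    emb = (weights[:, None] * embeds).sum(axis=0)                   # weighted embedding average
    return emb / (np.linalg.norm(emb) + 1e-8)                       # renormalize to the unit sphere

def relevancy(ray_embed, query_embed, negative_embeds):
    # Pairwise softmax of the query similarity against each negative phrase;
    # the worst case (minimum) over negatives is the relevancy score.
    pos = np.exp(ray_embed @ query_embed)
    negs = np.exp(negative_embeds @ ray_embed)  # one similarity per negative phrase
    return float(np.min(pos / (pos + negs)))

# Toy usage with random stand-in embeddings (D = 512, as in CLIP ViT-B).
# In LERF the negatives are CLIP embeddings of canonical phrases such as
# "object" or "stuff"; random vectors are used here only so the sketch runs.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
sigmas, deltas = rng.uniform(0.0, 5.0, 64), np.full(64, 0.02)
embeds = unit(rng.normal(size=(64, 512)))
ray_embed = render_language_embedding(sigmas, deltas, embeds)
print(relevancy(ray_embed, unit(rng.normal(size=512)), unit(rng.normal(size=(4, 512)))))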

Cite

Text

Kerr et al. "LERF: Language Embedded Radiance Fields." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01807

Markdown

[Kerr et al. "LERF: Language Embedded Radiance Fields." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/kerr2023iccv-lerf/) doi:10.1109/ICCV51070.2023.01807

BibTeX

@inproceedings{kerr2023iccv-lerf,
  title     = {{LERF: Language Embedded Radiance Fields}},
  author    = {Kerr, Justin and Kim, Chung Min and Goldberg, Ken and Kanazawa, Angjoo and Tancik, Matthew},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {19729--19739},
  doi       = {10.1109/ICCV51070.2023.01807},
  url       = {https://mlanthology.org/iccv/2023/kerr2023iccv-lerf/}
}