GrabS: Generative Embodied Agent for 3D Object Segmentation Without Scene Supervision

Abstract

We study the hard problem of 3D object segmentation in complex point clouds without requiring human labels of 3D scenes for supervision. By relying on the similarity of pretrained 2D features or external signals such as motion to group 3D points as objects, existing unsupervised methods are usually limited to identifying simple objects like cars, or their segmented objects are often inferior due to the lack of objectness in pretrained features. In this paper, we propose a new two-stage pipeline called GrabS. The core concept of our method is to learn generative and discriminative object-centric priors as a foundation from object datasets in the first stage, and then design an embodied agent to learn to discover multiple objects by querying against the pretrained generative priors in the second stage. We extensively evaluate our method on two real-world datasets and a newly created synthetic dataset, demonstrating remarkable segmentation performance, clearly surpassing all existing unsupervised methods.
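The abstract describes the pipeline only at a high level. As a rough illustration of the control flow it names, a minimal sketch of the two-stage structure might look like the following; all class and function names (ObjectPrior, EmbodiedAgent, score, propose, segment) are hypothetical placeholders, not the authors' code, and the placeholder scoring and proposal logic stands in for the paper's actual generative models and agent.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# All names are illustrative placeholders, not the authors' implementation.

import numpy as np


class ObjectPrior:
    """Stage 1: object-centric prior pretrained on object datasets.

    Exposes a scalar "objectness" score for a candidate point set; in the
    paper this role is played by generative and discriminative models,
    which are not reproduced here.
    """

    def score(self, points: np.ndarray) -> float:
        # Placeholder: a real prior would evaluate how object-like the
        # candidate is under the pretrained generative model.
        return float(np.exp(-np.var(points)))


class EmbodiedAgent:
    """Stage 2: agent that proposes object candidates in a scene and keeps
    those the pretrained prior rates as sufficiently object-like."""

    def __init__(self, prior: ObjectPrior, threshold: float = 0.5):
        self.prior = prior
        self.threshold = threshold

    def propose(self, scene: np.ndarray, num_candidates: int = 8):
        # Placeholder proposal mechanism: random spherical crops of the
        # scene point cloud.
        rng = np.random.default_rng(0)
        for _ in range(num_candidates):
            center = scene[rng.integers(len(scene))]
            mask = np.linalg.norm(scene - center, axis=1) < 1.0
            yield scene[mask]

    def segment(self, scene: np.ndarray):
        # Query each candidate against the pretrained prior; candidates
        # that score above threshold are kept as discovered objects.
        return [c for c in self.propose(scene)
                if len(c) > 0 and self.prior.score(c) > self.threshold]


if __name__ == "__main__":
    scene = np.random.rand(2048, 3) * 10.0  # toy point cloud stand-in
    agent = EmbodiedAgent(ObjectPrior())
    objects = agent.segment(scene)
    print(f"discovered {len(objects)} candidate objects")
```

The property this sketch preserves is the division of labor the abstract emphasizes: the object prior is trained once on object datasets, while scene-level discovery queries that frozen prior and never requires annotations of 3D scenes.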

Cite

Text

Zhang et al. "GrabS: Generative Embodied Agent for 3D Object Segmentation Without Scene Supervision." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "GrabS: Generative Embodied Agent for 3D Object Segmentation Without Scene Supervision." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-grabs/)

BibTeX

@inproceedings{zhang2025iclr-grabs,
  title     = {{GrabS: Generative Embodied Agent for 3D Object Segmentation Without Scene Supervision}},
  author    = {Zhang, Zihui and Yang, Yafei and Wen, Hongtao and Yang, Bo},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-grabs/}
}