Pseudo-Embedding for Generalized Few-Shot Point Cloud Segmentation
Abstract
Existing generalized few-shot 3D segmentation (GFS3DS) methods typically prioritize enhancing the training of base-class prototypes while neglecting the rich semantic information within background regions for future novel classes. We introduce a novel GFS3DS learner that strategically leverages background context to improve both base prototype training and few-shot adaptability. Our method employs foundation models to extract semantic features from background points and grounds them on text embeddings to cluster the background points into pseudo-classes. This approach facilitates clearer base/novel class differentiation and generates pseudo prototypes that effectively mimic novel support samples. Comprehensive experiments on S3DIS and ScanNet datasets demonstrate the state-of-the-art performance of our method in both 1-shot and 5-shot tasks. Our approach significantly advances GFS3DS by unlocking the potential of background context, offering a promising avenue for broader applications. Code: https://github.com/jimtsai23/PseudoEmbed
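The abstract only sketches the pipeline, so the snippet below is a minimal, hypothetical illustration (not the authors' released code) of the grounding idea: background point features are assigned to their most similar text embedding to form pseudo-classes, and each pseudo-class is averaged into a pseudo prototype. The function name, tensor shapes, and the `min_points` threshold are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def pseudo_prototypes(bg_feats: torch.Tensor,
                      text_embeds: torch.Tensor,
                      min_points: int = 10) -> torch.Tensor:
    """Hypothetical sketch: ground background point features on text
    embeddings to form pseudo-classes, then average each pseudo-class
    into a pseudo prototype.

    bg_feats:    (N, D) features of background points from a foundation model
    text_embeds: (C, D) text embeddings of candidate (pseudo-)class names
    returns:     (K, D) pseudo prototypes, K <= C
    """
    # Cosine similarity between every background point and every text embedding
    sim = F.normalize(bg_feats, dim=-1) @ F.normalize(text_embeds, dim=-1).t()  # (N, C)
    assign = sim.argmax(dim=-1)  # pseudo-class index per background point

    protos = []
    for c in range(text_embeds.shape[0]):
        mask = assign == c
        # Keep only pseudo-classes with enough supporting points
        if mask.sum() >= min_points:
            protos.append(bg_feats[mask].mean(dim=0))

    if protos:
        return torch.stack(protos)
    return bg_feats.new_zeros((0, bg_feats.shape[-1]))
```

In this reading, the resulting pseudo prototypes would play the role of extra "support samples" during base training, which is how the abstract describes mimicking novel classes; the paper's actual clustering and prototype construction may differ.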
Cite
Text
Tsai et al. "Pseudo-Embedding for Generalized Few-Shot Point Cloud Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72764-1_22
Markdown
[Tsai et al. "Pseudo-Embedding for Generalized Few-Shot Point Cloud Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/tsai2024eccv-pseudoembedding/) doi:10.1007/978-3-031-72764-1_22
BibTeX
@inproceedings{tsai2024eccv-pseudoembedding,
title = {{Pseudo-Embedding for Generalized Few-Shot Point Cloud Segmentation}},
author = {Tsai, Chih-Jung and Chen, Hwann-Tzong and Liu, Tyng-Luh},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72764-1_22},
url = {https://mlanthology.org/eccv/2024/tsai2024eccv-pseudoembedding/}
}