LLaFS: When Large Language Models Meet Few-Shot Segmentation

Abstract

This paper proposes LLaFS, the first attempt to leverage large language models (LLMs) in few-shot segmentation. In contrast to conventional few-shot segmentation methods, which rely only on the limited and biased information from the annotated support images, LLaFS leverages the vast prior knowledge gained by LLMs as an effective supplement and directly uses the LLM to segment images in a few-shot manner. To enable the text-based LLM to handle image-related tasks, we carefully design an input instruction that allows the LLM to produce segmentation results represented as polygons, and propose a region-attribute table to simulate the human visual mechanism and provide multi-modal guidance. We also synthesize pseudo samples and use curriculum learning for pretraining to augment data and achieve better optimization. LLaFS achieves state-of-the-art results on multiple datasets, showing the potential of using LLMs for few-shot computer vision tasks.
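The abstract notes that the LLM's segmentation output is represented as polygons. As a minimal sketch of what consuming such an output could look like (not the paper's implementation; the text format, function name, and parsing are illustrative assumptions), the snippet below rasterizes a polygon given as vertex coordinates into a binary mask:

```python
# Minimal sketch (not from the paper): turning an LLM's polygon output into a
# binary segmentation mask. Assumes the LLM emits vertex coordinates as
# (x, y) pairs in pixel space; names and format are illustrative only.
import re
import numpy as np
from PIL import Image, ImageDraw

def polygon_text_to_mask(polygon_text: str, height: int, width: int) -> np.ndarray:
    """Parse '(x1,y1) (x2,y2) ...' style output and fill it into a 0/1 mask."""
    coords = [float(v) for v in re.findall(r"-?\d+\.?\d*", polygon_text)]
    points = list(zip(coords[0::2], coords[1::2]))  # (x, y) vertex pairs
    mask_img = Image.new("L", (width, height), 0)
    ImageDraw.Draw(mask_img).polygon(points, outline=1, fill=1)
    return np.array(mask_img, dtype=np.uint8)

# Example: a hypothetical LLM response describing a triangular region.
mask = polygon_text_to_mask("(10,10) (120,15) (60,100)", height=128, width=128)
print(mask.shape, mask.sum())  # (128, 128) and the number of filled pixels
```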

Cite

Text

Zhu et al. "LLaFS: When Large Language Models Meet Few-Shot Segmentation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00296

Markdown

[Zhu et al. "LLaFS: When Large Language Models Meet Few-Shot Segmentation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/zhu2024cvpr-llafs/) doi:10.1109/CVPR52733.2024.00296

BibTeX

@inproceedings{zhu2024cvpr-llafs,
  title     = {{LLaFS: When Large Language Models Meet Few-Shot Segmentation}},
  author    = {Zhu, Lanyun and Chen, Tianrun and Ji, Deyi and Ye, Jieping and Liu, Jun},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {3065-3075},
  doi       = {10.1109/CVPR52733.2024.00296},
  url       = {https://mlanthology.org/cvpr/2024/zhu2024cvpr-llafs/}
}