Instruction-Guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model
Abstract
Large Multimodal Models (LMMs) have progressed significantly by extending large language models. Building on this progress, the latest LMMs can generate dense pixel-wise segmentation by integrating segmentation models. Despite these innovations, the textual responses and segmentation masks of existing works remain at the instance level, showing limited ability to perform fine-grained understanding and segmentation even when provided with detailed textual cues. To overcome this limitation, we introduce the Multi-Granularity Large Multimodal Model (MGLMM), which can seamlessly adjust the granularity of Segmentation and Captioning (SegCap) following user instructions, from panoptic SegCap to fine-grained SegCap. We name this new task Multi-Granularity Segmentation and Captioning (MGSC). Observing the lack of a benchmark for model training and evaluation on the MGSC task, we establish a benchmark with multi-granularity aligned masks and captions using our customized automated annotation pipeline. The benchmark comprises 10K images and more than 30K image-question pairs. We will release our dataset along with the implementation of our automated annotation pipeline for further research. In addition, we propose a novel unified SegCap data format that unifies heterogeneous segmentation datasets and effectively facilitates learning to associate object concepts with visual features during multi-task training. Extensive experiments demonstrate that MGLMM excels at more than eight downstream tasks and achieves state-of-the-art performance in MGSC, GCG, image captioning, referring segmentation, multiple/empty segmentation, and reasoning segmentation. These strong capabilities and the versatility of MGLMM underscore its potential impact on advancing multimodal research.
Cite
Text
Yuan et al. "Instruction-Guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I9.33054
Markdown
[Yuan et al. "Instruction-Guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/yuan2025aaai-instruction/) doi:10.1609/AAAI.V39I9.33054
BibTeX
@inproceedings{yuan2025aaai-instruction,
title = {{Instruction-Guided Multi-Granularity Segmentation and Captioning with Large Multimodal Model}},
author = {Yuan, Xu and Zhou, Li and Sun, Zenghui and Zhou, Zikun and Lan, Jinsong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {9725-9733},
doi = {10.1609/AAAI.V39I9.33054},
url = {https://mlanthology.org/aaai/2025/yuan2025aaai-instruction/}
}