ParGo: Bridging Vision-Language with Partial and Global Views

Abstract

This work presents ParGo, a novel Partial-Global projector designed to connect the vision and language modalities for Multimodal Large Language Models (MLLMs). Unlike previous works that rely on global attention-based projectors, our ParGo bridges the representation gap between the separately pre-trained vision encoders and the LLMs by integrating global and partial views, which alleviates the overemphasis on prominent regions. To facilitate the effective training of ParGo, we collect a large-scale detail-captioned image-text dataset named ParGoCap-1M-PT, consisting of 1 million images paired with high-quality captions. Extensive experiments on several MLLM benchmarks demonstrate the effectiveness of our ParGo, highlighting its superiority in aligning vision and language modalities. Compared to the conventional Q-Former projector, our ParGo achieves an improvement of 259.96 on the MME benchmark. Furthermore, our experiments reveal that ParGo significantly outperforms other projectors, particularly on tasks that emphasize detail perception abilities.
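To make the partial-global idea concrete, the following is a minimal numpy sketch of a projector in which a small set of learnable query tokens cross-attend to vision-encoder patch features: each "partial" token is masked to a local window of patches, while "global" tokens attend to all patches, and the resulting tokens are what would be fed to the LLM. This is an illustrative assumption-based sketch, not the authors' implementation; the function name, window partitioning, and token counts are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def partial_global_projector(patches, n_partial=4, n_global=2, seed=0):
    """Sketch of a ParGo-style projector (illustrative, not the paper's code).

    patches: (n_patches, d) array of vision-encoder features.
    Returns (n_partial + n_global, d) tokens for the language model:
    partial tokens see only a local window; global tokens see everything.
    """
    n_patches, d = patches.shape
    rng = np.random.default_rng(seed)
    # learnable query tokens (randomly initialized here)
    queries = rng.standard_normal((n_partial + n_global, d))

    # attention mask: -inf blocks attention, 0 allows it
    mask = np.full((n_partial + n_global, n_patches), -np.inf)
    win = n_patches // n_partial
    for i in range(n_partial):
        mask[i, i * win:(i + 1) * win] = 0.0  # each partial token: one window
    mask[n_partial:, :] = 0.0                 # global tokens: all patches

    scores = queries @ patches.T / np.sqrt(d) + mask
    attn = softmax(scores, axis=-1)
    return attn @ patches  # projected tokens passed on to the LLM
```

With all-ones patch features, every output token is the (uniform) average of its visible window, so the sketch is easy to sanity-check; in a real model the queries and attention projections would be trained end-to-end.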

Cite

Text

Wang et al. "ParGo: Bridging Vision-Language with Partial and Global Views." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I7.32806

Markdown

[Wang et al. "ParGo: Bridging Vision-Language with Partial and Global Views." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-pargo/) doi:10.1609/AAAI.V39I7.32806

BibTeX

@inproceedings{wang2025aaai-pargo,
  title     = {{ParGo: Bridging Vision-Language with Partial and Global Views}},
  author    = {Wang, An-Lan and Shan, Bin and Shi, Wei and Lin, Kun-Yu and Fei, Xiang and Tang, Guozhi and Liao, Lei and Tang, Jingqun and Huang, Can and Zheng, Wei-Shi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {7491--7499},
  doi       = {10.1609/AAAI.V39I7.32806},
  url       = {https://mlanthology.org/aaai/2025/wang2025aaai-pargo/}
}