VCoder: Versatile Vision Encoders for Multimodal Large Language Models

Abstract

Humans possess the remarkable skill of Visual Perception: the ability to see and understand the seen, which helps them make sense of the visual world and, in turn, reason. Multimodal Large Language Models (MLLMs) have recently achieved impressive performance on vision-language tasks, ranging from visual question answering and image captioning to visual reasoning and image generation. However, when prompted to identify or count (perceive) the entities in a given image, existing MLLM systems fail. Working towards developing an accurate MLLM system for perception and reasoning, we propose using Versatile vision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the VCoder with perception modalities such as segmentation or depth maps, improving the MLLM's perception abilities. Secondly, we leverage the images from COCO and outputs from off-the-shelf vision perception models to create our COCO Segmentation Text (COST) dataset for training and evaluating MLLMs on the object perception task. Thirdly, we introduce metrics to assess the object perception abilities of MLLMs on our COST dataset. Lastly, we provide extensive experimental evidence proving the VCoder's improved object-level perception skills over existing Multimodal LLMs, including GPT-4V. We open-source our dataset, code, and models to promote research.
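To make the core idea concrete, the sketch below shows one plausible way an auxiliary "perception" encoder's features (computed from a segmentation or depth map) could be projected into the LLM's embedding space and concatenated with the usual image tokens. This is a minimal illustration of the general adapter pattern the abstract describes, not the authors' released implementation; the class and helper names (`PerceptionAdapter`, `build_multimodal_prefix`) and all dimensions are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code): project features from a
# perception map (segmentation/depth) into the LLM embedding space and
# prepend them to the ordinary image tokens as extra visual context.
import torch
import torch.nn as nn


class PerceptionAdapter(nn.Module):
    """Maps perception-map features into the LLM's token space.

    All dimensions here are illustrative placeholders.
    """

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, perception_feats: torch.Tensor) -> torch.Tensor:
        # perception_feats: (batch, num_tokens, vision_dim)
        return self.proj(perception_feats)


def build_multimodal_prefix(image_tokens, perception_tokens, adapter):
    """Hypothetical helper: concatenate projected perception tokens with
    image tokens to form the visual prefix fed to the LLM."""
    extra = adapter(perception_tokens)
    return torch.cat([extra, image_tokens], dim=1)


if __name__ == "__main__":
    adapter = PerceptionAdapter()
    img = torch.randn(1, 256, 4096)  # image tokens already in LLM space
    seg = torch.randn(1, 256, 1024)  # features from an encoder run on a segmentation map
    prefix = build_multimodal_prefix(img, seg, adapter)
    print(prefix.shape)  # torch.Size([1, 512, 4096])
```

The appeal of an adapter like this is that the base MLLM and its vision encoder can stay frozen while only the lightweight projection is trained, which is a common design choice for grafting new input modalities onto an existing model.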

Cite

Text

Jain et al. "VCoder: Versatile Vision Encoders for Multimodal Large Language Models." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02644

Markdown

[Jain et al. "VCoder: Versatile Vision Encoders for Multimodal Large Language Models." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/jain2024cvpr-vcoder/) doi:10.1109/CVPR52733.2024.02644

BibTeX

@inproceedings{jain2024cvpr-vcoder,
  title     = {{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}},
  author    = {Jain, Jitesh and Yang, Jianwei and Shi, Humphrey},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {27992--28002},
  doi       = {10.1109/CVPR52733.2024.02644},
  url       = {https://mlanthology.org/cvpr/2024/jain2024cvpr-vcoder/}
}