MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs

Abstract

State-of-the-art retrieval models typically address a straightforward search scenario, in which retrieval tasks are fixed (e.g., finding a passage to answer a specific question) and only a single modality is supported for both queries and retrieved results. This paper introduces techniques for advancing information retrieval with multimodal large language models (MLLMs), enabling a broader search scenario, termed universal multimodal retrieval, where multiple modalities and diverse retrieval tasks are accommodated. To this end, we first study fine-tuning an MLLM as a bi-encoder retriever on 10 datasets with 16 retrieval tasks. Our empirical results show that the fine-tuned MLLM retriever is capable of understanding challenging queries, composed of both text and image, but it underperforms compared to a smaller CLIP retriever in cross-modal retrieval tasks due to the modality bias exhibited by MLLMs. To address this issue, we propose modality-aware hard negative mining to mitigate the bias. Second, we propose continually fine-tuning the universal multimodal retriever to enhance its text retrieval capability while preserving its multimodal retrieval capability. As a result, our model, MM-Embed, achieves state-of-the-art performance on the multimodal retrieval benchmark M-BEIR, which spans multiple domains and tasks, while also surpassing the state-of-the-art text retrieval model, NV-Embed-v1, on the MTEB retrieval benchmark. Finally, we explore prompting off-the-shelf MLLMs as zero-shot rerankers to refine the ranking of candidates from the multimodal retriever. We find that, through prompting and reranking, MLLMs can further improve multimodal retrieval when the user queries (e.g., text-image composed queries) are more complex and challenging to understand. These findings also pave the way for advancing universal multimodal retrieval in the future. We release the model weights at: https://huggingface.co/nvidia/MM-Embed.
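Since MM-Embed is used as a bi-encoder, queries (text, image, or text+image) and candidates are embedded independently and then ranked by embedding similarity. The sketch below illustrates only that ranking step under stated assumptions: `load_embedder` and `embedder.encode` are hypothetical placeholders standing in for whatever interface the released checkpoint (https://huggingface.co/nvidia/MM-Embed) actually exposes; consult the model card for the real loading and encoding API.

```python
# Minimal sketch of bi-encoder retrieval with a universal multimodal embedder.
# The similarity/ranking step below is standard; the embedding calls at the
# bottom are hypothetical placeholders, not MM-Embed's actual API.
import numpy as np

def cosine_rank(query_emb: np.ndarray, cand_embs: np.ndarray, top_k: int = 5):
    """Rank candidates by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    scores = c @ q                        # cosine similarity per candidate
    order = np.argsort(-scores)[:top_k]   # highest-scoring candidates first
    return [(int(i), float(scores[i])) for i in order]

# Hypothetical usage: embed a text-image composed query and a pool of text
# passages, then rank the passages (cross-modal retrieval).
# embedder = load_embedder("nvidia/MM-Embed")                      # placeholder
# q = embedder.encode(text="What breed is this dog?", image="dog.jpg")
# cands = np.stack([embedder.encode(text=p) for p in passages])
# print(cosine_rank(q, cands))
```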

Cite

Text

Lin et al. "MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs." International Conference on Learning Representations, 2025.

Markdown

[Lin et al. "MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/lin2025iclr-mmembed/)

BibTeX

@inproceedings{lin2025iclr-mmembed,
  title     = {{MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs}},
  author    = {Lin, Sheng-Chieh and Lee, Chankyu and Shoeybi, Mohammad and Lin, Jimmy and Catanzaro, Bryan and Ping, Wei},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/lin2025iclr-mmembed/}
}