MMTEB: Massive Multilingual Text Embedding Benchmark

Abstract

Text embeddings are typically evaluated on a narrow set of tasks, limited in terms of languages, domains, and task types. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) -- a large-scale community-driven initiative expanding MTEB to over 500 quality-controlled evaluation tasks across 1,000+ languages. MMTEB includes a wide range of challenging novel tasks such as instruction following, long-document retrieval, and code retrieval, and represents the largest multilingual collection of evaluation tasks for embedding models to date. We use this collection to construct multiple highly multilingual benchmarks, on which we evaluate a representative set of models. Our findings indicate that, while LLM-based models can achieve state-of-the-art performance on a subset of languages, the best-performing publicly available model across languages is the notably smaller multilingual-e5-large-instruct. Massive benchmarks often impose high computational demands, limiting accessibility, particularly for low-resource communities. To address this, we downsample tasks based on inter-task correlation (i.e., selecting only a diverse set of tasks) while preserving relative model rankings. We further optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks at a significantly lower computational cost. For instance, we introduce a new zero-shot English benchmark that maintains a similar model ordering at a fraction of the cost.
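The abstract mentions downsampling tasks based on inter-task correlation while preserving relative model rankings. The snippet below is a minimal illustrative sketch of that general idea, not the authors' exact procedure: it greedily drops the most redundant task from a model-by-task score matrix and then checks, via Spearman correlation, that the ranking induced by the reduced task set still tracks the ranking on the full set. All function and variable names here are hypothetical.

```python
# Illustrative sketch (not the published MMTEB selection algorithm):
# greedily remove the task whose scores are most correlated with the
# remaining tasks, then verify that model rankings are preserved.
import numpy as np
from scipy.stats import spearmanr


def downsample_tasks(scores: np.ndarray, task_names: list[str], n_keep: int):
    """scores: (n_models, n_tasks) matrix of per-task model scores."""
    keep = list(range(scores.shape[1]))
    while len(keep) > n_keep:
        sub = scores[:, keep]
        # Pairwise correlation between tasks, measured across models.
        corr = np.abs(np.corrcoef(sub, rowvar=False))
        np.fill_diagonal(corr, 0.0)
        # Drop the task most redundant with the rest of the selection.
        most_redundant = keep[int(np.argmax(corr.mean(axis=0)))]
        keep.remove(most_redundant)

    # Sanity check: rankings on the subset should match the full benchmark.
    full_means = scores.mean(axis=1)
    subset_means = scores[:, keep].mean(axis=1)
    rho, _ = spearmanr(full_means, subset_means)
    return [task_names[i] for i in keep], rho
```

In practice, a high Spearman rho between the full-benchmark and subset rankings would suggest the cheaper subset can stand in for the full evaluation, which is the property the abstract claims for the reduced benchmarks.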

Cite

Text

Enevoldsen et al. "MMTEB: Massive Multilingual Text Embedding Benchmark." International Conference on Learning Representations, 2025.

Markdown

[Enevoldsen et al. "MMTEB: Massive Multilingual Text Embedding Benchmark." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/enevoldsen2025iclr-mmteb/)

BibTeX

@inproceedings{enevoldsen2025iclr-mmteb,
  title     = {{MMTEB: Massive Multilingual Text Embedding Benchmark}},
  author    = {Enevoldsen, Kenneth and Chung, Isaac and Kerboua, Imene and Kardos, Márton and Mathur, Ashwin and Stap, David and Gala, Jay and Siblini, Wissam and Krzemiński, Dominik and Winata, Genta Indra and Sturua, Saba and Utpala, Saiteja and Ciancone, Mathieu and Schaeffer, Marion and Misra, Diganta and Dhakal, Shreeya and Rystrøm, Jonathan and Solomatin, Roman and Çağatan, Ömer Veysel and Kundu, Akash and Bernstorff, Martin and Xiao, Shitao and Sukhlecha, Akshita and Pahwa, Bhavish and Poświata, Rafał and Gv, Kranthi Kiran and Ashraf, Shawon and Auras, Daniel and Plüster, Björn and Harries, Jan Philipp and Magne, Loïc and Mohr, Isabelle and Zhu, Dawei and Gisserot-Boukhlef, Hippolyte and Aarsen, Tom and Kostkan, Jan and Wojtasik, Konrad and Lee, Taemin and Suppa, Marek and Zhang, Crystina and Rocca, Roberta and Hamdy, Mohammed and Michail, Andrianos and Yang, John and Faysse, Manuel and Vatolin, Aleksei and Thakur, Nandan and Dey, Manan and Vasani, Dipam and Chitale, Pranjal A and Tedeschi, Simone and Tai, Nguyen and Snegirev, Artem and Hendriksen, Mariya and Günther, Michael and Xia, Mengzhou and Shi, Weijia and Lù, Xing Han and Clive, Jordan and K, Gayatri and Anna, Maksimova and Wehrli, Silvan and Tikhonova, Maria and Panchal, Henil Shalin and Abramov, Aleksandr and Ostendorff, Malte and Liu, Zheng and Clematide, Simon and Miranda, Lester James Validad and Fenogenova, Alena and Song, Guangyu and Bin Safi, Ruqiya and Li, Wen-Ding and Borghini, Alessia and Cassano, Federico and Hansen, Lasse and Hooker, Sara and Xiao, Chenghao and Adlakha, Vaibhav and Weller, Orion and Reddy, Siva and Muennighoff, Niklas},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/enevoldsen2025iclr-mmteb/}
}