MERGE$^3$: Efficient Evolutionary Merging on Consumer-Grade GPUs
Abstract
Evolutionary model merging enables the creation of high-performing multi-task models but remains computationally prohibitive for consumer hardware. We introduce MERGE$^3$, an efficient framework that makes evolutionary merging of Large Language Models (LLMs) feasible on a single GPU by reducing fitness computation costs by 50$\times$ while retaining a large fraction of the original performance. MERGE$^3$ achieves this by Extracting a reduced dataset for evaluation, Estimating model abilities using Item Response Theory (IRT), and Evolving optimal merges via IRT-based performance estimators. Our method enables state-of-the-art multilingual and cross-lingual merging, transferring knowledge across languages with significantly lower computational overhead. We provide theoretical guarantees and an open-source library, democratizing high-quality model merging.
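To make the IRT-based fitness estimation concrete, below is a minimal Python sketch (not the authors' code) of the standard two-parameter logistic (2PL) IRT model: given a small set of anchor items with pre-calibrated discrimination and difficulty parameters, a candidate merge's ability is estimated by maximum likelihood from its binary responses and can serve as a cheap fitness proxy inside an evolutionary loop. All names (fit_ability, the anchor-item setup) are hypothetical illustrations, not the paper's API.

# Hedged sketch: 2PL IRT ability estimation on a reduced anchor set.
# Assumes item parameters (a, b) were calibrated beforehand; names are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL probability of a correct response: sigmoid(a * (theta - b))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fit_ability(responses, a, b):
    """MLE of a single model's ability theta from binary responses
    on a small anchor-item subset with calibrated 2PL parameters."""
    def neg_log_lik(theta):
        p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Toy usage: 20 anchor items stand in for a full benchmark.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=20)   # item discriminations
b = rng.normal(0.0, 1.0, size=20)    # item difficulties
true_theta = 1.2
responses = (rng.random(20) < p_correct(true_theta, a, b)).astype(float)
print("estimated ability:", fit_ability(responses, a, b))

In this setup, evaluating a candidate merge requires inference on only the anchor items rather than the full benchmark, which is the source of the reduced fitness-evaluation cost the abstract describes.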
Cite
Text
Mencattini et al. "MERGE$^3$: Efficient Evolutionary Merging on Consumer-Grade GPUs." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Mencattini et al. "MERGE$^3$: Efficient Evolutionary Merging on Consumer-Grade GPUs." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/mencattini2025icml-merge/)
BibTeX
@inproceedings{mencattini2025icml-merge,
title = {{MERGE$^3$: Efficient Evolutionary Merging on Consumer-Grade GPUs}},
author = {Mencattini, Tommaso and Minut, Robert Adrian and Crisostomi, Donato and Santilli, Andrea and Rodolà, Emanuele},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {43694--43715},
volume = {267},
url = {https://mlanthology.org/icml/2025/mencattini2025icml-merge/}
}