Scalable Model Merging with Progressive Layer-Wise Distillation

Abstract

Model merging offers an effective way to integrate the capabilities of multiple fine-tuned models. However, the performance degradation of the merged model remains a challenge, particularly when little or no data is available. This paper first highlights the necessity of domain-specific data for model merging by proving that data-agnostic algorithms can have arbitrarily bad worst-case performance. Building on this theoretical insight, we explore the relationship between model merging and distillation, introducing a novel few-shot merging algorithm, ProDistill (Progressive Layer-wise Distillation). Contrary to the common belief that layer-wise training hurts performance, we show that layer-wise teacher-student distillation not only enhances scalability but also improves model merging performance. We conduct extensive experiments to show that, compared to existing few-shot merging methods, ProDistill achieves state-of-the-art performance, with up to 6.14% and 6.61% improvements in vision and NLU tasks. Furthermore, we extend the experiments to models with over 10B parameters, showcasing the exceptional scalability of ProDistill.
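To make the idea of layer-wise teacher-student distillation for merging concrete, below is a minimal sketch in PyTorch. It merges two fine-tuned MLPs by initializing the student as the parameter average and then, layer by layer, training each student layer to match the corresponding teacher activations on a few-shot batch from each domain. The model shapes, synthetic data, and hyperparameters are illustrative assumptions, not the exact ProDistill procedure from the paper.

# Minimal sketch: layer-wise teacher-student distillation for merging two
# fine-tuned MLPs. Hypothetical setup with synthetic few-shot data; details
# of the actual ProDistill algorithm may differ.
import torch
import torch.nn as nn


def make_mlp(dims):
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)


dims = [16, 32, 32, 8]
teacher_a = make_mlp(dims)   # stands in for a model fine-tuned on task A
teacher_b = make_mlp(dims)   # stands in for a model fine-tuned on task B

# Few-shot, unlabeled inputs from each domain (synthetic placeholders).
x_a = torch.randn(8, 16)
x_b = torch.randn(8, 16)

# Initialize the merged (student) model as the parameter average.
student = make_mlp(dims)
with torch.no_grad():
    for p_s, p_a, p_b in zip(student.parameters(),
                             teacher_a.parameters(),
                             teacher_b.parameters()):
        p_s.copy_(0.5 * (p_a + p_b))

# Progressive layer-wise distillation: each student layer is trained to match
# the teachers' hidden activations, and the student's own outputs are fed
# forward to the next layer so earlier merging errors are accounted for.
h_s_a, h_s_b = x_a, x_b          # student inputs per domain
h_t_a, h_t_b = x_a, x_b          # teacher inputs per domain
for layer_s, layer_a, layer_b in zip(student, teacher_a, teacher_b):
    if isinstance(layer_s, nn.Linear):
        opt = torch.optim.Adam(layer_s.parameters(), lr=1e-3)
        with torch.no_grad():
            target_a = layer_a(h_t_a)
            target_b = layer_b(h_t_b)
        for _ in range(200):
            loss = (nn.functional.mse_loss(layer_s(h_s_a), target_a)
                    + nn.functional.mse_loss(layer_s(h_s_b), target_b))
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        h_s_a, h_s_b = layer_s(h_s_a), layer_s(h_s_b)
        h_t_a, h_t_b = layer_a(h_t_a), layer_b(h_t_b)

Because only one layer's parameters are optimized at a time, memory usage stays close to that of a single layer plus cached activations, which is what makes this style of merging attractive for models with billions of parameters.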

Cite

Text

Xu et al. "Scalable Model Merging with Progressive Layer-Wise Distillation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Xu et al. "Scalable Model Merging with Progressive Layer-Wise Distillation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/xu2025icml-scalable/)

BibTeX

@inproceedings{xu2025icml-scalable,
  title     = {{Scalable Model Merging with Progressive Layer-Wise Distillation}},
  author    = {Xu, Jing and Li, Jiazheng and Zhang, Jingzhao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {69451--69477},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/xu2025icml-scalable/}
}