CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging
Abstract
Model merging based on task vectors, i.e., the parameter differences between fine-tuned models and a shared base model, provides an efficient way to integrate multiple task-specific models into a multitask model without retraining. Recent works have endeavored to address the conflicts between task vectors, one of the significant challenges faced by model merging, through sparsification; however, two issues significantly limit their performance: high parameter overlap and unbalanced weight distribution. To address these issues, we propose a simple yet effective framework called CABS (Conflict-Aware and Balanced Sparsification), consisting of Conflict-Aware Sparsification (CA) and Balanced Sparsification (BS). CA reduces parameter overlap by applying masks during sequential pruning, ensuring that each task vector retains distinct, non-overlapping parameters. BS leverages $n$:$m$ pruning to preserve critical weights while maintaining an even distribution across layers. Our comprehensive experiments demonstrate that CABS outperforms state-of-the-art methods across diverse tasks and model sizes.
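The two components named in the abstract can be illustrated with a small sketch. The function names and the NumPy-based flat-vector setup below are illustrative assumptions, not the authors' released code: `nm_prune` keeps the `n` largest-magnitude weights in every group of `m` consecutive weights (the balanced $n$:$m$ pruning behind BS), and `cabs_merge` prunes task vectors sequentially, masking out coordinates already claimed by earlier vectors so the retained parameters never overlap (the CA idea).

```python
import numpy as np

def nm_prune(vec, n=2, m=4):
    """Balanced n:m pruning (BS, sketched): in every group of m
    consecutive weights, zero out all but the n largest-magnitude
    ones, so sparsity is spread evenly across the vector."""
    v = vec.copy()
    for start in range(0, len(v) - len(v) % m, m):
        group = v[start:start + m]
        drop = np.argsort(np.abs(group))[:m - n]  # smallest-magnitude slots
        group[drop] = 0.0
    return v

def cabs_merge(task_vectors, n=2, m=4):
    """Conflict-aware sequential sparsification (CA, sketched): each
    task vector is masked on coordinates already occupied by earlier
    pruned vectors before its own n:m pruning, so the kept parameters
    of different tasks are disjoint; the sparse vectors are then summed."""
    occupied = np.zeros_like(task_vectors[0], dtype=bool)
    merged = np.zeros_like(task_vectors[0])
    for tv in task_vectors:
        masked = np.where(occupied, 0.0, tv)  # hide already-claimed slots
        pruned = nm_prune(masked, n, m)
        occupied |= pruned != 0
        merged += pruned
    return merged
```

In practice the merged task vector would be added back to the shared base model's parameters; real implementations operate per weight tensor rather than on one flat vector.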
Cite
Text
Yang et al. "CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Yang et al. "CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/yang2025icml-cabs/)
BibTeX
@inproceedings{yang2025icml-cabs,
title = {{CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging}},
author = {Yang, Zongzhen and Qi, Binhang and Sun, Hailong and Long, Wenrui and Zhao, Ruobing and Gao, Xiang},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {70973--70999},
volume = {267},
url = {https://mlanthology.org/icml/2025/yang2025icml-cabs/}
}