Model Merging by Uncertainty-Based Gradient Matching

Abstract

Models trained on different datasets can be merged by weighted averaging of their parameters, but why does it work and when can it fail? Here, we connect the inaccuracy of weighted averaging to mismatches in the gradients and propose a new uncertainty-based scheme to improve the performance by reducing the mismatch. The connection also reveals implicit assumptions in other schemes such as averaging, task arithmetic, and Fisher-weighted averaging. Our new method gives consistent improvements for large language models and vision transformers, both in terms of performance and robustness to hyperparameters.
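To make the idea of uncertainty-weighted merging concrete, the sketch below shows per-parameter weighted averaging of fine-tuned models, where the weights come from diagonal curvature estimates (e.g., squared gradients / Fisher) used as a proxy for per-parameter certainty. This is a minimal illustration under assumed names (`merge_models`, `hessians`) and is not the paper's implementation; uniform weights recover plain parameter averaging, and passing a pretrained base recovers a task-arithmetic-style merge on task vectors.

```python
# Hypothetical sketch of uncertainty-weighted parameter averaging.
# Names and signatures are illustrative, not the paper's API.
import torch

def merge_models(params, hessians, base=None, eps=1e-8):
    """Merge per-task parameter dicts via per-parameter uncertainty weights.

    params   : list of dicts {name: tensor} -- fine-tuned parameters per task
    hessians : list of dicts {name: tensor} -- diagonal curvature estimates
               (e.g., squared gradients / Fisher) acting as certainty weights
    base     : optional dict of pretrained parameters; if given, merging is
               applied to the task vectors (theta_t - theta_base)
    """
    merged = {}
    for name in params[0]:
        # Normalize the curvature estimates into per-parameter weights.
        weights = torch.stack([h[name] for h in hessians]) + eps
        weights = weights / weights.sum(dim=0, keepdim=True)
        if base is None:
            merged[name] = (weights * torch.stack([p[name] for p in params])).sum(dim=0)
        else:
            deltas = torch.stack([p[name] - base[name] for p in params])
            merged[name] = base[name] + (weights * deltas).sum(dim=0)
    return merged
```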

Cite

Text

Daheim et al. "Model Merging by Uncertainty-Based Gradient Matching." International Conference on Learning Representations, 2024.

Markdown

[Daheim et al. "Model Merging by Uncertainty-Based Gradient Matching." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/daheim2024iclr-model/)

BibTeX

@inproceedings{daheim2024iclr-model,
  title     = {{Model Merging by Uncertainty-Based Gradient Matching}},
  author    = {Daheim, Nico and Möllenhoff, Thomas and Ponti, Edoardo and Gurevych, Iryna and Khan, Mohammad Emtiyaz},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/daheim2024iclr-model/}
}