Task Vector Quantization for Memory-Efficient Model Merging

Abstract

Model merging enables efficient multi-task models by combining task-specific fine-tuned checkpoints. However, storing multiple task-specific checkpoints requires significant memory, limiting scalability and restricting the application of model merging to larger models and more diverse tasks. In this paper, we propose quantizing task vectors (i.e., the difference between pre-trained and fine-tuned checkpoints) rather than the fine-tuned checkpoints themselves. We observe that task vectors exhibit a narrow weight range, which enables low-precision quantization (≤ 4 bits) within existing task-vector merging frameworks. To further mitigate quantization error at ultra-low bit precision (e.g., 2 bits), we introduce Residual Task Vector Quantization, which decomposes the task vector into a base vector and an offset component. We allocate bits according to quantization sensitivity, preserving precision for sensitive components while minimizing overall error within a given memory budget. Experiments on image classification and dense prediction show that our method maintains or improves model-merging performance while using only 8% of the memory required by full-precision checkpoints. Our code is available at https://aim-skku.github.io/TVQ/.
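
To make the idea concrete, here is a minimal sketch (not the authors' released code) of the two steps the abstract describes: quantizing a task vector directly, and the residual base-plus-offset decomposition for ultra-low precision. It assumes simple symmetric uniform quantization; the paper's actual quantizer and its sensitivity-based bit allocation may differ.

```python
# Illustrative sketch of task vector quantization; assumes symmetric
# uniform quantization, which may differ from the paper's quantizer.
import torch

def quantize_uniform(x: torch.Tensor, bits: int):
    """Quantize a tensor to `bits` precision with a single symmetric scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q, scale  # store the integer codes plus one scale, not fp32 weights

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q * scale

# Task vector: difference between fine-tuned and pre-trained checkpoints.
# Fine-tuning typically moves weights only slightly, so the task vector
# has a narrow range that tolerates low-bit quantization.
pretrained = torch.randn(1024, 1024)
finetuned = pretrained + 0.01 * torch.randn(1024, 1024)
task_vector = finetuned - pretrained

# Direct low-precision quantization of the task vector (<= 4 bits).
q, s = quantize_uniform(task_vector, bits=4)
tv_4bit = dequantize(q, s)

# Residual decomposition for ultra-low precision (e.g., 2 bits):
# quantize a base vector first, then quantize the leftover offset.
q_base, s_base = quantize_uniform(task_vector, bits=2)
base = dequantize(q_base, s_base)
offset = task_vector - base
q_off, s_off = quantize_uniform(offset, bits=2)
tv_residual = base + dequantize(q_off, s_off)

print(f"4-bit error:          {(tv_4bit - task_vector).abs().mean().item():.6f}")
print(f"2+2-bit residual err: {(tv_residual - task_vector).abs().mean().item():.6f}")
```

The dequantized task vector can then be added back to the shared pre-trained checkpoint inside any existing task-vector merging framework, so only the low-bit codes (and per-tensor scales) need to be stored per task.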

Cite

Text

Kim et al. "Task Vector Quantization for Memory-Efficient Model Merging." International Conference on Computer Vision, 2025.

Markdown

[Kim et al. "Task Vector Quantization for Memory-Efficient Model Merging." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/kim2025iccv-task/)

BibTeX

@inproceedings{kim2025iccv-task,
  title     = {{Task Vector Quantization for Memory-Efficient Model Merging}},
  author    = {Kim, Youngeun and Lee, Seunghwan and Jung, Aecheon and Ryu, Bogon and Hong, Sungeun},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {20105--20115},
  url       = {https://mlanthology.org/iccv/2025/kim2025iccv-task/}
}