Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting
Abstract
Fine-tuning a pre-trained model on a downstream task often degrades its original capabilities, a phenomenon known as "catastrophic forgetting". This is especially problematic when one does not have access to the data and recipe used to develop the pre-trained model. Under this constraint, most existing methods for mitigating forgetting are inapplicable. To address this challenge, we propose a sample weighting scheme for the fine-tuning data based solely on the pre-trained model’s losses. Specifically, we upweight the easy samples on which the pre-trained model’s loss is low, and vice versa, to limit the drift from the pre-trained model. Our approach is both orthogonal and complementary to existing methods; while such methods mostly operate in parameter or gradient space, ours operates in the sample space. We theoretically analyze the impact of fine-tuning with our method in a linear setting, showing that it stalls learning in a certain subspace, which inhibits overfitting to the target task. We empirically demonstrate the efficacy of our method on both language and vision tasks. As an example, when fine-tuning Gemma 2 2B on MetaMathQA, our method results in only a $0.8$% drop in accuracy on GSM8K (another math dataset) compared to standard fine-tuning, while preserving $5.4$% more accuracy on the pre-training datasets.
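The weighting scheme described above needs only the pre-trained model's per-sample losses on the fine-tuning data. Below is a minimal PyTorch sketch of that idea: compute those losses once with the frozen pre-trained model, map low losses ("easy" samples) to high weights, and use the weighted rather than uniform average loss during fine-tuning. The softmax-with-temperature mapping, the function names, and the classification-style cross-entropy are illustrative assumptions, not the exact recipe from the paper.

import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_pretrained_losses(pretrained_model, loader, device="cpu"):
    # Per-sample cross-entropy of the frozen pre-trained model on the
    # fine-tuning data -- the only quantity the weighting scheme needs.
    pretrained_model.eval()
    losses = []
    for x, y in loader:
        logits = pretrained_model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none"))
    return torch.cat(losses)

def easy_sample_weights(losses, temperature=1.0):
    # Low pre-trained loss ("easy" sample) -> high weight, and vice versa.
    # The softmax form and the temperature are illustrative choices;
    # scaling by len(losses) makes the weights average to 1.
    return torch.softmax(-losses / temperature, dim=0) * len(losses)

def weighted_batch_loss(model, x, y, weights, device="cpu"):
    # Fine-tuning loss for one batch, with per-sample weights replacing
    # the usual uniform average.
    per_sample = F.cross_entropy(model(x.to(device)), y.to(device), reduction="none")
    w = weights.to(device)
    return (w * per_sample).sum() / w.sum()

In a training loop, `weights` would be the slice of `easy_sample_weights(...)` corresponding to the current batch's indices, and the returned loss is backpropagated as usual.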
Cite
Text
Sanyal et al. "Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Sanyal et al. "Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/sanyal2025icml-upweighting/)
BibTeX
@inproceedings{sanyal2025icml-upweighting,
  title     = {{Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting}},
  author    = {Sanyal, Sunny and Prairie, Hayden and Das, Rudrajit and Kavis, Ali and Sanghavi, Sujay},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {52922--52957},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/sanyal2025icml-upweighting/}
}