Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
Abstract
The fine-tuning paradigm for addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we disclose that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, and that lightweight fine-tuning is more effective. We attribute this to inconsistent class conditions caused by heavy fine-tuning. Based on this observation, we develop a low-complexity and accurate long-tail learning algorithm, LIFT, which facilitates fast prediction and compact models via adaptive lightweight fine-tuning. Experiments clearly verify that both the training time and the number of learned parameters are significantly reduced, while predictive performance is more accurate than that of state-of-the-art approaches. The implementation code is available at https://github.com/shijxcs/LIFT.
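The contrast the abstract draws between heavy (full) and lightweight fine-tuning can be illustrated by counting trainable parameters under each regime. The sketch below is a generic, hypothetical illustration, not the paper's LIFT implementation; the backbone size and head dimensions are assumed for the example.

```python
# Generic illustration (not the paper's LIFT algorithm): lightweight
# fine-tuning freezes the pre-trained backbone and updates only a small
# task-specific module, so far fewer parameters are trained than in
# heavy (full) fine-tuning. All numbers below are hypothetical.

BACKBONE_PARAMS = 86_000_000      # assumed ViT-B/16-scale foundation model
FEATURE_DIM, NUM_CLASSES = 768, 100
HEAD_PARAMS = FEATURE_DIM * NUM_CLASSES  # linear classifier head

def trainable_count(full_fine_tuning: bool) -> int:
    """Number of parameters updated by gradient descent under each regime."""
    if full_fine_tuning:
        return BACKBONE_PARAMS + HEAD_PARAMS  # heavy: everything is trained
    return HEAD_PARAMS                        # lightweight: backbone frozen

heavy = trainable_count(full_fine_tuning=True)
light = trainable_count(full_fine_tuning=False)
print(f"heavy fine-tuning updates {heavy:,} parameters")
print(f"lightweight fine-tuning updates {light:,} parameters")
```

Under these assumed sizes, the lightweight regime trains roughly three orders of magnitude fewer parameters, which is the kind of reduction in learned parameters the abstract refers to.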
Cite
Text
Shi et al. "Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts." International Conference on Machine Learning, 2024.
Markdown
[Shi et al. "Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/shi2024icml-longtail/)
BibTeX
@inproceedings{shi2024icml-longtail,
title = {{Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts}},
author = {Shi, Jiang-Xin and Wei, Tong and Zhou, Zhi and Shao, Jie-Jing and Han, Xin-Yan and Li, Yu-Feng},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {45014--45039},
volume = {235},
url = {https://mlanthology.org/icml/2024/shi2024icml-longtail/}
}