Understanding and Improving Transfer Learning of Deep Models via Neural Collapse
Abstract
With the ever-increasing complexity of large-scale pre-trained models, coupled with a shortage of labeled data for downstream training, transfer learning has become the primary approach in many fields, including natural language processing, computer vision, and multi-modal learning. Despite recent progress, fine-tuning large-scale pre-trained vision models still relies mostly on trial and error. This work investigates the relationship between neural collapse (NC) and transfer learning for classification problems. NC is an intriguing yet prevalent phenomenon recently discovered in the final-layer features and linear classifiers of trained neural networks: during the terminal phase of training, the within-class variability of the features diminishes to zero, while the between-class feature means become maximally and equally distant. In this work, we examine the NC properties of pre-trained models on both downstream and source training data, and we find a strong correlation between feature collapse and downstream performance. In particular, a systematic pattern emerges when linear probing pre-trained models on downstream training data: the more collapsed the features of a pre-trained model are on the downstream data, the higher the transfer accuracy. We also study the relationship between NC and transfer accuracy on the source training data. Moreover, these findings allow us to develop a principled, parameter-efficient fine-tuning method that employs skip connections to induce last-layer feature collapse on downstream data. Our proposed fine-tuning method delivers strong performance while reducing the number of fine-tuned parameters by at least 90% and mitigating overfitting, especially when downstream data is scarce.
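The within-class variability collapse described in the abstract is commonly quantified in the NC literature by the NC1 metric, the trace of the within-class covariance multiplied by the pseudo-inverse of the between-class covariance, averaged over classes. The sketch below shows how such a metric might be computed from last-layer features; the function name and implementation details are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (illustrative, not the authors' released code) of the NC1
# within-class variability metric, tr(Sigma_W @ pinv(Sigma_B)) / K, computed
# from last-layer features; smaller values indicate stronger feature collapse.
import numpy as np

def nc1_metric(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (N, d) array of last-layer features; labels: (N,) class ids."""
    classes = np.unique(labels)
    K, d = len(classes), features.shape[1]
    global_mean = features.mean(axis=0)

    sigma_w = np.zeros((d, d))  # within-class covariance Sigma_W
    sigma_b = np.zeros((d, d))  # between-class covariance Sigma_B
    for c in classes:
        feats_c = features[labels == c]
        mean_c = feats_c.mean(axis=0)
        centered = feats_c - mean_c
        sigma_w += centered.T @ centered / features.shape[0]
        diff = (mean_c - global_mean)[:, None]
        sigma_b += diff @ diff.T / K

    # The pseudo-inverse handles the rank-deficient between-class covariance
    # (Sigma_B has rank at most K - 1).
    return float(np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / K)
```

Under this metric, an NC1 value near zero on the downstream data would indicate strong feature collapse, which the paper correlates with higher linear-probing transfer accuracy.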
Cite
Text
Li et al. "Understanding and Improving Transfer Learning of Deep Models via Neural Collapse." Transactions on Machine Learning Research, 2024.
Markdown
[Li et al. "Understanding and Improving Transfer Learning of Deep Models via Neural Collapse." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/li2024tmlr-understanding-a/)
BibTeX
@article{li2024tmlr-understanding-a,
  title = {{Understanding and Improving Transfer Learning of Deep Models via Neural Collapse}},
  author = {Li, Xiao and Liu, Sheng and Zhou, Jinxin and Lu, Xinyu and Fernandez-Granda, Carlos and Zhu, Zhihui and Qu, Qing},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/li2024tmlr-understanding-a/}
}