Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?
Abstract
As the scale of vision models continues to grow, Visual Prompt Tuning (VPT) has gained attention as a parameter-efficient transfer learning technique, owing to its superior performance compared to traditional full finetuning. However, the conditions favoring VPT (the "when") and the underlying rationale (the "why") remain unclear. In this paper, we conduct a comprehensive analysis across 19 distinct datasets and tasks. To understand the "when" aspect, we identify the scenarios where VPT proves favorable along two dimensions: task objectives and data distributions. We find that VPT is preferable when there is 1) a substantial disparity between the original and the downstream task objectives (e.g., transitioning from classification to counting), or 2) a notable similarity in data distributions between the two tasks (e.g., both involve natural images). In exploring the "why" dimension, our results indicate that VPT's success cannot be attributed solely to overfitting and optimization considerations. The unique way VPT preserves original features and adds parameters appears to be a pivotal factor. Our study provides insights into VPT's mechanisms and offers guidance for its optimal utilization.
Cite
Text
Han et al. "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?." International Conference on Learning Representations, 2024.
Markdown
[Han et al. "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/han2024iclr-facing/)
BibTeX
@inproceedings{han2024iclr-facing,
title = {{Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?}},
author = {Han, Cheng and Wang, Qifan and Cui, Yiming and Wang, Wenguan and Huang, Lifu and Qi, Siyuan and Liu, Dongfang},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/han2024iclr-facing/}
}