Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-Tuning

Abstract

Large language models demonstrate impressive performance on downstream tasks, yet fully fine-tuning all of their parameters requires extensive resources. To mitigate this, Parameter Efficient Fine-Tuning (PEFT) strategies, such as LoRA, have been developed. In this paper, we delve into the concept of task-specific directions (TSDs), which are critical for transitioning large models from their pretrained states to task-specific enhancements in PEFT. We propose a framework to clearly define these directions and explore their properties and practical utilization challenges. We then introduce a novel approach, LoRA-Dash, which aims to maximize the impact of TSDs during fine-tuning, thereby enhancing model performance on targeted tasks. Extensive experiments demonstrate the effectiveness of LoRA-Dash, and in-depth analyses further reveal its underlying mechanisms.
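
For context on the kind of PEFT update the paper builds on, below is a minimal sketch of a generic LoRA-style layer in PyTorch: a frozen pretrained weight plus a trainable low-rank update scaled by alpha/r. The module name and hyperparameters are illustrative assumptions; this is not the paper's LoRA-Dash method or its TSD computation, only the standard LoRA baseline it extends.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank update (generic LoRA sketch)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight stays frozen; only the low-rank factors A and B are trained.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.normal_(self.lora_A, std=0.02)  # B stays zero at init, so the update starts at zero
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update


# Usage: only about 2 * r * (in_features + out_features) parameters are trainable per layer.
layer = LoRALinear(768, 768, r=8)
y = layer(torch.randn(4, 768))
```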

Cite

Text

Si et al. "Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-Tuning." International Conference on Learning Representations, 2025.

Markdown

[Si et al. "Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-Tuning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/si2025iclr-unleashing/)

BibTeX

@inproceedings{si2025iclr-unleashing,
  title     = {{Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-Tuning}},
  author    = {Si, Chongjie and Shi, Zhiyi and Zhang, Shifan and Yang, Xiaokang and Pfister, Hanspeter and Shen, Wei},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/si2025iclr-unleashing/}
}