Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation
Abstract
With the increasingly powerful performance and enormous scale of pretrained models, improving parameter efficiency in fine-tuning has become a crucial need for effective and efficient adaptation to various downstream tasks. One representative line of fine-tuning methods is Orthogonal Fine-Tuning (OFT), which rigorously preserves the angular distances within the parameter space in order to retain the pretrained knowledge. Despite its empirical effectiveness, OFT still suffers from low parameter efficiency at $\mathcal{O}(d^2)$ and limited capability of downstream adaptation. Inspired by Givens rotation, in this paper we propose quasi-Givens Orthogonal Fine-Tuning (qGOFT) to address these problems. We first use $\mathcal{O}(d)$ Givens rotations to accomplish arbitrary orthogonal transformations in $SO(d)$ with provable equivalence, reducing the parameter complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$. We then introduce flexible norm and relative angular adjustments under soft orthogonality regularization to enhance the capability of adapting to downstream semantic deviations. Extensive experiments on various tasks and pretrained models validate the effectiveness of our method.
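For intuition (this is standard linear algebra, not an excerpt from the paper), a Givens rotation $G(i, j, \theta)$ rotates by angle $\theta$ in the coordinate plane spanned by axes $i$ and $j$ and acts as the identity elsewhere, so each factor carries a single trainable parameter:

$$
G(i, j, \theta)_{kl} =
\begin{cases}
\cos\theta & (k, l) \in \{(i, i), (j, j)\}, \\
-\sin\theta & (k, l) = (i, j), \\
\sin\theta & (k, l) = (j, i), \\
\delta_{kl} & \text{otherwise},
\end{cases}
\qquad
R = \prod_{k=1}^{K} G(i_k, j_k, \theta_k) \in SO(d).
$$

Composing $K = \mathcal{O}(d)$ such sparse rotations yields a full orthogonal transform parameterized by only $\mathcal{O}(d)$ angles, in contrast to the $\mathcal{O}(d^2)$ entries of a dense orthogonal matrix as in OFT; the choice of index pairs $(i_k, j_k)$ shown here is illustrative.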
Cite

Ma et al. "Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation." International Conference on Machine Learning, 2024. https://mlanthology.org/icml/2024/ma2024icml-parameter/

BibTeX
@inproceedings{ma2024icml-parameter,
title = {{Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation}},
author = {Ma, Xinyu and Chu, Xu and Yang, Zhibang and Lin, Yang and Gao, Xin and Zhao, Junfeng},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {33686--33729},
volume = {235},
url = {https://mlanthology.org/icml/2024/ma2024icml-parameter/}
}