Efficient Adaptation of Pre-Trained Vision Transformer via Householder Transformation

Abstract

A common strategy for Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViTs) is to adapt the model to downstream tasks by learning a low-rank adaptation matrix, decomposed into the product of a down-projection and an up-projection matrix; the bottleneck dimensionality between them is what keeps the number of learnable parameters small, as in prevalent methods such as LoRA and Adapter. However, these low-rank strategies typically employ a fixed bottleneck dimensionality, which limits their flexibility in handling layer-wise variation. To address this limitation, we propose a novel PEFT approach inspired by Singular Value Decomposition (SVD) for representing the adaptation matrix. SVD decomposes a matrix into the product of a left unitary matrix, a diagonal matrix of singular values, and a right unitary matrix. We use Householder transformations to construct orthogonal matrices that efficiently mimic these unitary factors, each requiring only a single learnable vector. The diagonal values are learned in a layer-wise manner, allowing them to flexibly capture the unique properties of each layer. This approach yields adaptation matrices with varying effective ranks across layers, providing greater flexibility in adapting pre-trained models. Experiments on standard downstream vision tasks demonstrate that our method achieves promising fine-tuning performance.
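
To make the construction concrete, the sketch below shows one way such an SVD-style adapter could be assembled in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the module name, the use of a single Householder reflection per side, the initialization scheme, and the residual placement in forward are all assumptions made for exposition.

import torch
import torch.nn as nn

class HouseholderAdapter(nn.Module):
    """Minimal sketch: Delta_W = H(u) @ diag(s) @ H(v), where
    H(w) = I - 2 w w^T / ||w||^2 is an orthogonal Householder
    reflection parameterized by a single vector w."""

    def __init__(self, dim: int):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.02)  # left reflector (hypothetical init)
        self.v = nn.Parameter(torch.randn(dim) * 0.02)  # right reflector (hypothetical init)
        # Layer-wise diagonal; zero init so the adapter starts as a no-op.
        self.s = nn.Parameter(torch.zeros(dim))

    @staticmethod
    def householder(w: torch.Tensor) -> torch.Tensor:
        # Any nonzero w yields an orthogonal matrix I - 2 w w^T / ||w||^2.
        w = w / (w.norm() + 1e-8)
        return torch.eye(w.numel(), device=w.device) - 2.0 * torch.outer(w, w)

    def delta(self) -> torch.Tensor:
        # SVD-style factorization of the adaptation matrix.
        return self.householder(self.u) @ torch.diag(self.s) @ self.householder(self.v)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual update added to the frozen layer's output: x @ Delta_W^T.
        return x @ self.delta().T

Note how the rank flexibility described in the abstract would emerge in such a design: entries of s that shrink toward zero drop out of the product, so each layer can settle on its own effective rank rather than sharing a fixed bottleneck dimensionality, while each orthogonal factor costs only one dim-sized vector.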

Cite

Text

Dong et al. "Efficient Adaptation of Pre-Trained Vision Transformer via Householder Transformation." Neural Information Processing Systems, 2024. doi:10.52202/079017-3239

Markdown

[Dong et al. "Efficient Adaptation of Pre-Trained Vision Transformer via Householder Transformation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/dong2024neurips-efficient/) doi:10.52202/079017-3239

BibTeX

@inproceedings{dong2024neurips-efficient,
  title     = {{Efficient Adaptation of Pre-Trained Vision Transformer via Householder Transformation}},
  author    = {Dong, Wei and Sun, Yuan and Yang, Yiting and Zhang, Xing and Lin, Zhijun and Yan, Qingsen and Zhang, Haokui and Wang, Peng and Yang, Yang and Shen, Hengtao},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3239},
  url       = {https://mlanthology.org/neurips/2024/dong2024neurips-efficient/}
}