Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

Abstract

Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models. However, with the exponential growth of model sizes, conventional full fine-tuning, which stores an individual copy of the network for each task, leads to increasingly large storage and transmission overhead. Adapter-based Parameter-Efficient Tuning (PET) methods address this challenge by tuning lightweight adapters inserted into the frozen pre-trained models. In this paper, we investigate how to make adapters even more efficient, reaching a new minimum size required to store a task-specific fine-tuned network. Inspired by the observation that the parameters of adapters converge at flat local minima, we find that adapters are resistant to noise in parameter space, which means they are also resistant to low numerical precision. To train low-precision adapters, we propose a computationally efficient quantization method that minimizes the quantization error. Through extensive experiments, we find that low-precision adapters exhibit minimal performance degradation, and even 1-bit precision is sufficient for adapters. Our experiments demonstrate that 1-bit adapters outperform all other PET methods on both the VTAB-1K benchmark and few-shot FGVC datasets, while requiring the smallest storage size. Our findings show, for the first time, the significant potential of quantization techniques in PET, providing a general solution to enhance the parameter efficiency of adapter-based PET methods.
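
As a rough illustration of the idea described in the abstract, the sketch below binarizes an adapter weight tensor to 1-bit values with a per-tensor scale chosen to minimize the L2 quantization error (the closed-form minimizer of ||W − α·sign(W)||² is α = mean(|W|)). This is only a minimal, assumed example of error-minimizing 1-bit quantization; the function name, tensor shapes, and training details are illustrative and not taken from the paper.

```python
import torch


def binarize_with_scale(w: torch.Tensor) -> torch.Tensor:
    """Quantize a weight tensor to 1-bit values {-alpha, +alpha}.

    The per-tensor scale alpha = mean(|w|) minimizes the L2 quantization
    error ||w - alpha * sign(w)||^2 over alpha. Illustrative sketch only;
    not the authors' exact procedure.
    """
    alpha = w.abs().mean()           # error-minimizing scale for sign binarization
    return alpha * torch.sign(w)     # stored as signs plus a single scalar


if __name__ == "__main__":
    # Hypothetical adapter down-projection weight (hidden dim 768, bottleneck 16)
    w = torch.randn(768, 16)
    w_q = binarize_with_scale(w)
    rms_err = (w - w_q).pow(2).mean().sqrt()
    print(f"RMS quantization error: {rms_err:.4f}")
```

Storing only the sign bits and one scalar per tensor is what reduces the per-task storage footprint relative to full-precision adapters.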

Cite

Text

Jie et al. "Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01579

Markdown

[Jie et al. "Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/jie2023iccv-revisiting/) doi:10.1109/ICCV51070.2023.01579

BibTeX

@inproceedings{jie2023iccv-revisiting,
  title     = {{Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy}},
  author    = {Jie, Shibo and Wang, Haoqing and Deng, Zhi-Hong},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {17217-17226},
  doi       = {10.1109/ICCV51070.2023.01579},
  url       = {https://mlanthology.org/iccv/2023/jie2023iccv-revisiting/}
}