Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning

Abstract

This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient _quantum unitary parameterization_. With the use of Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. Quantum-PEFT achieves a vanishingly smaller number of trainable parameters than the lowest-rank LoRA as dimensions grow, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
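
To make the logarithmic-versus-linear scaling concrete, the minimal sketch below (not the authors' implementation; the circuit layout, the depth L, and helper names such as pauli_unitary are illustrative assumptions) builds a full-rank d x d unitary on n = log2(d) qubits from single-qubit Pauli-Y rotations interleaved with fixed CNOT entanglers, then compares its trainable-parameter count with rank-1 LoRA's r(d_in + d_out).

import numpy as np

def ry(theta):
    # Single-qubit Pauli-Y rotation exp(-i * theta/2 * Y).
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(mats):
    # Kronecker product of a list of matrices.
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def cnot(n, control, target):
    # CNOT on qubits (control, target) of an n-qubit register, written
    # as a 2^n x 2^n permutation matrix (parameter-free entangler).
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        out = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[out, basis] = 1.0
    return U

def pauli_unitary(thetas, n):
    # Full-rank 2^n x 2^n unitary from L layers of per-qubit RY rotations,
    # each followed by a fixed CNOT chain. Only the angles are trainable,
    # so the parameter count is L * n = L * log2(d).
    U = np.eye(2 ** n, dtype=complex)
    for layer_angles in thetas:
        U = kron_all([ry(t) for t in layer_angles]) @ U
        for q in range(n - 1):
            U = cnot(n, q, q + 1) @ U
    return U

d = 256                               # ambient dimension of the adapted weight (illustrative)
n = int(np.log2(d))                   # qubits needed grows as log2(d)
L = 3                                 # circuit depth (illustrative choice)
rng = np.random.default_rng(0)
thetas = rng.standard_normal((L, n))  # trainable angles: L * log2(d) of them

U = pauli_unitary(thetas, n)
assert np.allclose(U.conj().T @ U, np.eye(d))      # unitary, hence full rank

print("Pauli-circuit parameters :", thetas.size)   # 3 * 8 = 24
print("rank-1 LoRA parameters   :", 1 * (d + d))   # r * (d_in + d_out) = 512

Under these assumptions, d = 256 yields 24 trainable angles versus 512 parameters for rank-1 LoRA, and the gap widens as d grows, matching the logarithmic-versus-linear scaling stated in the abstract.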

Cite

Text

Koike-Akino et al. "Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning." International Conference on Learning Representations, 2025.

Markdown

[Koike-Akino et al. "Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/koikeakino2025iclr-quantumpeft/)

BibTeX

@inproceedings{koikeakino2025iclr-quantumpeft,
  title     = {{Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning}},
  author    = {Koike-Akino, Toshiaki and Tonin, Francesco and Wu, Yongtao and Wu, Frank Zhengqing and Candogan, Leyla Naz and Cevher, Volkan},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/koikeakino2025iclr-quantumpeft/}
}