Improving LoRA in Privacy-Preserving Federated Learning
Abstract
Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods for pre-trained language models, owing to its good performance and computational efficiency. LoRA injects a product of two trainable rank-decomposition matrices on top of each frozen pre-trained model module. However, when applied in the setting of privacy-preserving federated learning (FL), LoRA may become unstable for the following reasons: 1) the effects of data heterogeneity and multi-step local updates are non-negligible; 2) the additive noise applied to gradient updates to guarantee differential privacy (DP) can be amplified; and 3) the final performance is susceptible to hyper-parameters. A key factor behind these phenomena is the discordance between the two low-rank matrices being optimized jointly by local clients but aggregated separately by the central server. Thus, this paper proposes an efficient and effective variant of LoRA, Federated Freeze A LoRA (FFA-LoRA), to alleviate these challenges and further halve the communication cost of federated fine-tuning of LLMs. The core idea of FFA-LoRA is to fix the randomly initialized non-zero matrices and fine-tune only the zero-initialized matrices. Compared to LoRA, FFA-LoRA is motivated by both practical and theoretical benefits in privacy-preserving FL. Our experiments demonstrate that FFA-LoRA delivers more consistent performance with better computational efficiency than vanilla LoRA across various FL tasks.
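The abstract's core idea translates directly into a small adapter layer. Below is a minimal PyTorch sketch of an FFA-LoRA-style linear module, based only on the description above: the pre-trained weight and the randomly initialized matrix A are frozen, and only the zero-initialized matrix B is trained (and hence communicated in FL). The class name, rank, and scaling hyper-parameters are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn

class FFALoRALinear(nn.Module):
    """Sketch of an FFA-LoRA-style adapter: freeze the base weight W and the
    random matrix A; train only the zero-initialized matrix B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        in_f, out_f = base.in_features, base.out_features
        # A: random init, frozen -- FFA-LoRA's key difference from vanilla LoRA,
        # where both A and B are trained and aggregated separately by the server
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.02, requires_grad=False)
        # B: zero init, the only trainable (and communicated) matrix,
        # so the adapter's initial contribution B @ A is exactly zero
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T
```

Because only B is trainable, each client uploads half as many adapter parameters as in vanilla LoRA, and server-side averaging of B alone avoids the mismatch that arises when the product of separately averaged A and B matrices differs from the average of the clients' products.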
Cite
Text
Sun et al. "Improving LoRA in Privacy-Preserving Federated Learning." International Conference on Learning Representations, 2024.
Markdown
[Sun et al. "Improving LoRA in Privacy-Preserving Federated Learning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/sun2024iclr-improving/)
BibTeX
@inproceedings{sun2024iclr-improving,
  title = {{Improving LoRA in Privacy-Preserving Federated Learning}},
  author = {Sun, Youbang and Li, Zitao and Li, Yaliang and Ding, Bolin},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/sun2024iclr-improving/}
}