Tensor-Aggregated LoRA in Federated Fine-Tuning
Abstract
The combination of Large Language Models (LLMs) and Federated Learning (FL) to leverage privacy-preserving data has emerged as a promising approach to further enhance the Parameter-Efficient Fine-Tuning (PEFT) capabilities of LLMs. In real-world FL settings with resource heterogeneity, the training process of Low-Rank Adaptation (LoRA), the representative PEFT method, still faces two major challenges: aggregation noise and aggregation misalignment. In this paper, we propose a novel Tensor-aggregated LoRA (Te-LoRA) in Federated Fine-tuning based on an alternating-freeze training strategy to avoid aggregation noise without additional server-side computational costs, while mitigating aggregation suboptimality caused by parameter misalignment between heterogeneous LoRAs. To address the aggregation-suboptimality issue in particular, we design the Pre-Aggregation Alignment strategy (PAA-strategy) and the Tensor-to-Matrix strategy (T2M-strategy), which align heterogeneous LoRAs and aggregate them into a unified tensor; this tensor is then decomposed into matrices adapted for client download. Extensive experiments demonstrate the effectiveness and robustness of Te-LoRA in both homogeneous and heterogeneous settings.
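The tensor-aggregate-then-decompose idea from the abstract can be illustrated with a minimal sketch. All names, shapes, and the aggregation rule below are assumptions for illustration, not the paper's actual algorithm: each client holds a LoRA pair with its own rank, the server stacks the full low-rank updates into one tensor, averages it (a plain FedAvg-style stand-in for the paper's aggregation), and re-decomposes the result into rank-matched factor pairs via truncated SVD for download.

```python
import numpy as np

# Hypothetical sketch (shapes and aggregation rule are assumptions):
# client i holds LoRA factors B_i (d x r_i) and A_i (r_i x k) with
# heterogeneous ranks r_i, so its weight update is B_i @ A_i (d x k).

rng = np.random.default_rng(0)
d, k = 16, 12                      # base-weight dimensions
ranks = [2, 4, 8]                  # heterogeneous client LoRA ranks

# Client-side LoRA factor pairs.
client_lora = [(rng.standard_normal((d, r)), rng.standard_normal((r, k)))
               for r in ranks]

# Tensor aggregation: stack the d x k updates into an (n_clients, d, k)
# tensor and average over the client axis.
updates = np.stack([B @ A for B, A in client_lora])    # shape (3, d, k)
merged = updates.mean(axis=0)                          # shape (d, k)

# Tensor-to-matrix step: truncated SVD hands each client a rank-r pair
# (B_new, A_new) whose product best approximates the merged update
# (optimal in Frobenius norm by Eckart-Young).
U, S, Vt = np.linalg.svd(merged, full_matrices=False)
errs = []
for r in ranks:
    B_new = U[:, :r] * S[:r]       # d x r
    A_new = Vt[:r, :]              # r x k
    errs.append(np.linalg.norm(merged - B_new @ A_new))
    print(f"rank {r}: approximation error {errs[-1]:.3f}")
```

Higher-rank clients recover the merged update more faithfully, which is why the decomposition must be adapted per client rather than broadcast at a single rank.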
Cite
Text
Li et al. "Tensor-Aggregated LoRA in Federated Fine-Tuning." International Conference on Computer Vision, 2025.

Markdown
[Li et al. "Tensor-Aggregated LoRA in Federated Fine-Tuning." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/li2025iccv-tensoraggregated/)

BibTeX
@inproceedings{li2025iccv-tensoraggregated,
title = {{Tensor-Aggregated LoRA in Federated Fine-Tuning}},
author = {Li, Zhixuan and Xu, Binqian and Shu, Xiangbo and Zhang, Jiachao and Yao, Yazhou and Xie, Guo-Sen and Tang, Jinhui},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {1058--1067},
url = {https://mlanthology.org/iccv/2025/li2025iccv-tensoraggregated/}
}