Federated Low-Rank Adaptation for Foundation Models: A Survey

Abstract

Effectively leveraging private datasets remains a significant challenge in developing foundation models. Federated Learning (FL) has recently emerged as a collaborative framework that enables multiple users to fine-tune these models while mitigating data privacy risks. Meanwhile, Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models—an area we term FedLoRA—by focusing on three key challenges: distributed learning, heterogeneity, and efficiency. We further categorize existing work based on the specific methods used to address each challenge. Finally, we discuss open research questions and highlight promising directions for future investigation, outlining the next steps for advancing FedLoRA.
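To make the abstract's two building blocks concrete, here is a minimal illustrative sketch, not code from the survey: a frozen linear layer augmented with a trainable low-rank (LoRA) update, and a plain FedAvg-style server step that averages only the LoRA matrices across clients. The names (`LoRALinear`, `fedavg_lora`) and all hyperparameters are assumptions chosen for illustration; actual FedLoRA methods differ in how they aggregate and personalize these adapters.

```python
"""Hedged sketch: LoRA adapter + naive federated averaging of adapter weights."""
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        # Only these small matrices are trained and communicated in FedLoRA.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scaling * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


def fedavg_lora(client_states: list[dict]) -> dict:
    """Server step: average only the LoRA parameters (simplest aggregation choice)."""
    keys = [k for k in client_states[0] if "lora_" in k]
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0) for k in keys}


if __name__ == "__main__":
    clients = [LoRALinear(16, 16) for _ in range(3)]
    # ... each client would locally fine-tune its lora_A / lora_B here ...
    global_update = fedavg_lora([c.state_dict() for c in clients])
    for c in clients:
        # Broadcast the averaged adapter back; frozen base weights are untouched.
        c.load_state_dict(global_update, strict=False)
    print({k: tuple(v.shape) for k, v in global_update.items()})
```

Averaging A and B separately, as above, is the simplest possible aggregation; much of the FedLoRA literature surveyed here addresses the heterogeneity and efficiency issues that arise from exactly this step.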

Cite

Text

Yang et al. "Federated Low-Rank Adaptation for Foundation Models: A Survey." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/1196

Markdown

[Yang et al. "Federated Low-Rank Adaptation for Foundation Models: A Survey." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/yang2025ijcai-federated/) doi:10.24963/IJCAI.2025/1196

BibTeX

@inproceedings{yang2025ijcai-federated,
  title     = {{Federated Low-Rank Adaptation for Foundation Models: A Survey}},
  author    = {Yang, Yiyuan and Long, Guodong and Lu, Qinghua and Zhu, Liming and Jiang, Jing and Zhang, Chengqi},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {10779--10787},
  doi       = {10.24963/IJCAI.2025/1196},
  url       = {https://mlanthology.org/ijcai/2025/yang2025ijcai-federated/}
}