Federated Fine-Tuning of Large Language Models Under Heterogeneous Tasks and Client Resources

Abstract

Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of Large Language Models (LLMs). While promising, it raises significant challenges due to the heterogeneous resources and data distributions of clients. This study introduces FlexLoRA, a simple yet effective aggregation scheme for LLM fine-tuning, which mitigates the "bucket effect" in traditional FL that restricts the potential of clients with ample resources by tying them to the capabilities of the least-resourced participants. FlexLoRA allows dynamic adjustment of local LoRA ranks, fostering the development of a global model imbued with broader, less task-specific knowledge. By synthesizing a full-size LoRA weight from individual client contributions and employing Singular Value Decomposition (SVD) for weight redistribution, FlexLoRA fully leverages heterogeneous client resources. Our experiments, involving thousands of clients with heterogeneous NLP tasks and resources, validate the efficacy of FlexLoRA: the federated global model consistently outperforms SOTA FL methods on downstream NLP tasks across various heterogeneous distributions. FlexLoRA's practicality is further underscored by our theoretical analysis and its seamless integration with existing LoRA-based FL methods, offering a path toward cross-device, privacy-preserving federated tuning for LLMs.
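The abstract describes the core aggregation idea: each client's low-rank LoRA factors are expanded to a full-size weight update, aggregated on the server, and then redistributed to each client at its own rank via truncated SVD. Below is a minimal sketch of that idea based solely on the abstract; the function name, NumPy implementation, and data-size-proportional weighting are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def flexlora_aggregate(client_updates, client_weights, client_ranks):
    """Illustrative sketch of FlexLoRA-style aggregation (hypothetical API).

    client_updates: list of (B_i, A_i) LoRA factors, B_i: (d, r_i), A_i: (r_i, k)
    client_weights: aggregation weights (e.g., proportional to local data size)
    client_ranks:   target LoRA rank for each client after redistribution
    """
    # 1) Synthesize a full-size update from each client's low-rank factors
    #    and aggregate them into a single global weight update.
    d = client_updates[0][0].shape[0]
    k = client_updates[0][1].shape[1]
    global_delta = np.zeros((d, k))
    for (B, A), w in zip(client_updates, client_weights):
        global_delta += w * (B @ A)

    # 2) Redistribute via truncated SVD so each client receives factors at its
    #    own rank; better-resourced clients keep higher-rank, more expressive factors.
    U, S, Vt = np.linalg.svd(global_delta, full_matrices=False)
    redistributed = []
    for r in client_ranks:
        B_new = U[:, :r] * S[:r]   # (d, r), singular values folded into B
        A_new = Vt[:r, :]          # (r, k)
        redistributed.append((B_new, A_new))
    return redistributed
```

Because aggregation happens in the full-size weight space, clients with different LoRA ranks can participate in the same round without being forced down to the smallest rank, which is the "bucket effect" the paper targets.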

Cite

Text

Bai et al. "Federated Fine-Tuning of Large Language Models Under Heterogeneous Tasks and Client Resources." Neural Information Processing Systems, 2024. doi:10.52202/079017-0461

Markdown

[Bai et al. "Federated Fine-Tuning of Large Language Models Under Heterogeneous Tasks and Client Resources." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/bai2024neurips-federated/) doi:10.52202/079017-0461

BibTeX

@inproceedings{bai2024neurips-federated,
  title     = {{Federated Fine-Tuning of Large Language Models Under Heterogeneous Tasks and Client Resources}},
  author    = {Bai, Jiamu and Chen, Daoyuan and Qian, Bingchen and Yao, Liuyi and Li, Yaliang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0461},
  url       = {https://mlanthology.org/neurips/2024/bai2024neurips-federated/}
}