LLM at Network Edge: A Layer-Wise Efficient Federated Fine-Tuning Approach
Abstract
Fine-tuning large language models (LLMs) poses significant computational burdens, especially in federated learning (FL) settings. We introduce Layer-wise Efficient Federated Fine-tuning (LEFF), a novel method designed to enhance the efficiency of FL fine-tuning while preserving model performance and minimizing client-side computational overhead. LEFF strategically selects layers for fine-tuning based on each client's computational capacity, thereby mitigating the straggler effect prevalent in heterogeneous environments. Furthermore, LEFF incorporates an importance-driven layer sampling mechanism that prioritizes layers with greater influence on model performance. Theoretical analysis establishes that LEFF achieves a convergence rate of $\mathcal{O}(1/\sqrt{T})$. Extensive experiments on diverse datasets demonstrate that LEFF attains superior computational efficiency and model performance compared to existing federated fine-tuning methods, particularly under heterogeneous conditions.
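For intuition only, the sketch below illustrates the kind of capacity-constrained, importance-weighted layer sampling the abstract describes: each client draws a subset of transformer layers to fine-tune, biased toward high-importance layers and capped by its compute budget. The function and parameter names (sample_layers, capacity_fraction) and the use of gradient-norm-style importance scores are assumptions made for illustration, not LEFF's actual procedure.

# Minimal sketch of importance-driven, capacity-aware layer sampling.
# Hypothetical names; not taken from the paper's implementation.
import numpy as np

def sample_layers(layer_importance: np.ndarray,
                  capacity_fraction: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Pick which layers a client fine-tunes in this round.

    layer_importance: non-negative score per layer (higher = more influential).
    capacity_fraction: share of layers this client can afford to update, in (0, 1].
    Returns the indices of the selected layers.
    """
    num_layers = layer_importance.shape[0]
    # Cap the number of trainable layers by the client's compute budget.
    budget = max(1, int(np.floor(capacity_fraction * num_layers)))
    # Normalize importance scores into sampling probabilities.
    probs = layer_importance / layer_importance.sum()
    # Sample without replacement, biased toward high-importance layers.
    return rng.choice(num_layers, size=budget, replace=False, p=probs)

rng = np.random.default_rng(0)
importance = np.array([0.5, 1.0, 2.0, 4.0, 3.0, 1.5])  # e.g., per-layer gradient norms
print(sample_layers(importance, capacity_fraction=0.5, rng=rng))

A weaker client (small capacity_fraction) updates fewer layers per round, which limits its local computation and reduces straggling, while the importance weighting keeps the most influential layers trained most often.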
Cite
Text
Shen et al. "LLM at Network Edge: A Layer-Wise Efficient Federated Fine-Tuning Approach." Advances in Neural Information Processing Systems, 2025.

Markdown
[Shen et al. "LLM at Network Edge: A Layer-Wise Efficient Federated Fine-Tuning Approach." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/shen2025neurips-llm-a/)

BibTeX
@inproceedings{shen2025neurips-llm-a,
  title     = {{LLM at Network Edge: A Layer-Wise Efficient Federated Fine-Tuning Approach}},
  author    = {Shen, Jinglong and Cheng, Nan and Xu, Wenchao and Wang, Haozhao and Guo, Yifan and Xu, Jiajie},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/shen2025neurips-llm-a/}
}