Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs

Abstract

Federated learning (FL) is a popular paradigm for collaborative training that avoids direct data exposure between clients. However, privacy risks remain: FL-trained large language models are capable of memorizing and completing phrases and sentences contained in the training data when given their prefixes. Thus, adversarial and honest-but-curious clients can recover the training data of other participants simply through targeted prompting. In this work, we demonstrate that a popular and simple fine-tuning strategy, low-rank adaptation (LoRA), reduces memorization during FL by a factor of up to 10 without significant performance cost. We study this effect by performing fine-tuning tasks in high-risk domains such as medicine, law, and finance. We observe a reduction in memorization across a wide variety of model families, ranging from 1B to 70B parameters. We find that LoRA can reduce memorization in centralized learning as well, and we compare how the memorization patterns differ. Furthermore, we study the effect of hyperparameters and show that LoRA can be combined with other privacy-preserving techniques such as gradient clipping with Gaussian noise, secure aggregation, and Goldfish loss to further improve record-level privacy while maintaining performance.
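
For readers unfamiliar with LoRA, the sketch below illustrates how low-rank adapters are typically attached to a causal language model with the Hugging Face peft library. This is not the authors' code; the checkpoint name and hyperparameters (rank, alpha, target modules) are placeholder assumptions. In a federated setting, typically only the small adapter weights are trained locally and exchanged during aggregation, while the base weights stay frozen.

# Illustrative sketch (not the paper's implementation): LoRA fine-tuning setup
# with Hugging Face transformers + peft. Checkpoint and hyperparameters are
# assumed placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.2-1B"  # placeholder 1B-parameter checkpoint
base_model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA freezes the pretrained weight W and learns a low-rank update B @ A,
# so only a small fraction of parameters is trained (and, in FL, shared).
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports that only adapter weights are trainable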

Cite

Text

Bossy et al. "Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs." Transactions on Machine Learning Research, 2026.

Markdown

[Bossy et al. "Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/bossy2026tmlr-mitigating/)

BibTeX

@article{bossy2026tmlr-mitigating,
  title     = {{Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs}},
  author    = {Bossy, Thierry and Vignoud, Julien Tuấn Tú and Rabbani, Tahseen and Pastoriza, Juan R. Troncoso and Jaggi, Martin},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/bossy2026tmlr-mitigating/}
}