Efficient Language Model Architectures for Differentially Private Federated Learning
Abstract
Cross-device federated learning (FL) trains a model on data distributed across typically millions of edge devices without the data ever leaving the devices. SGD is the standard client optimizer for on-device training in cross-device FL, favored for its memory and computational efficiency. However, in centralized training of neural language models, adaptive optimizers are preferred because they offer improved stability and performance. In light of this, we ask whether language models can be modified so that they can be efficiently trained with SGD client optimizers, and we answer this affirmatively. We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent network, obtained by modifying the sigmoid and tanh activations in the recurrent cell, and show in large-scale experiments that this new model converges faster and achieves better utility than the standard CIFG recurrent model in cross-device FL. We further show that the proposed scale-invariant modification also helps in federated learning of larger transformer models. Finally, we demonstrate that the scale-invariant modification is compatible with other non-adaptive algorithms. In particular, our results suggest an improved privacy-utility trade-off in federated learning with differential privacy.
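For intuition, the sketch below shows one plausible way to build the cell the abstract describes: a CIFG recurrent step (the input gate is coupled to the forget gate, i = 1 - f, so only three weight blocks are needed) with the sigmoid and tanh activations swapped for scale-invariant variants. The abstract does not give the exact construction, so the normalization used here, along with the names si_sigmoid, si_tanh, and si_cifg_step, is an illustrative assumption, not the paper's definition.

```python
import numpy as np

def si_sigmoid(x, eps=1e-6):
    # Assumed scale-invariant sigmoid: normalizing the pre-activation by its
    # L2 norm makes the output unchanged under positive rescaling of x.
    # The paper's exact construction may differ; this is only illustrative.
    return 1.0 / (1.0 + np.exp(-x / (np.linalg.norm(x) + eps)))

def si_tanh(x, eps=1e-6):
    # Same assumed normalization applied before tanh.
    return np.tanh(x / (np.linalg.norm(x) + eps))

def si_cifg_step(x, h, c, params):
    """One step of a CIFG cell with scale-invariant activations (hypothetical).

    params maps each gate name to a (W, U, b) triple: input-to-hidden weights,
    hidden-to-hidden weights, and bias.
    """
    Wf, Uf, bf = params["f"]  # forget gate
    Wo, Uo, bo = params["o"]  # output gate
    Wc, Uc, bc = params["c"]  # candidate cell state
    f = si_sigmoid(Wf @ x + Uf @ h + bf)
    i = 1.0 - f               # coupled input gate: no separate input-gate weights
    o = si_sigmoid(Wo @ x + Uo @ h + bo)
    c_new = f * c + i * si_tanh(Wc @ x + Uc @ h + bc)
    h_new = o * si_tanh(c_new)
    return h_new, c_new
```

Because si_sigmoid and si_tanh depend only on the direction of the pre-activation, rescaling the weights by a positive constant leaves the cell's output unchanged, which is the property the abstract attributes to the scale-invariant modification.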
Cite
Text
Ro et al. "Efficient Language Model Architectures for Differentially Private Federated Learning." ICLR 2024 Workshops: PML, 2024.
Markdown
[Ro et al. "Efficient Language Model Architectures for Differentially Private Federated Learning." ICLR 2024 Workshops: PML, 2024.](https://mlanthology.org/iclrw/2024/ro2024iclrw-efficient/)
BibTeX
@inproceedings{ro2024iclrw-efficient,
title = {{Efficient Language Model Architectures for Differentially Private Federated Learning}},
author = {Ro, Jae Hun and Bhojanapalli, Srinadh and Xu, Zheng and Zhang, Yanxiang and Suresh, Ananda Theertha},
booktitle = {ICLR 2024 Workshops: PML},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/ro2024iclrw-efficient/}
}