Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees
Abstract
In this work, we propose a federated dynamical low-rank training (FeDLRT) scheme to reduce client compute and communication costs, two significant performance bottlenecks in horizontal federated learning. Our method builds upon dynamical low-rank splitting schemes for manifold-constrained optimization to create a global low-rank basis of network weights, which enables client training on a small coefficient matrix. A consistent global low-rank basis allows us to incorporate a variance correction scheme and prove global loss descent and convergence to a stationary point. Dynamic augmentation and truncation of the low-rank bases automatically optimizes computing and communication resource utilization. We demonstrate the efficiency of FeDLRT on an array of computer vision benchmarks and show a reduction of client compute and communication costs by up to an order of magnitude with minimal impact on global accuracy.
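To make the setup concrete, the sketch below illustrates the core idea described in the abstract: network weights are kept in a low-rank factorization W ≈ U S Vᵀ, the bases U and V act as a shared global component, and each client only trains and communicates the small r × r coefficient matrix S. This is a minimal, hypothetical illustration in PyTorch; the class and function names are ours, not from the paper's implementation, and the augmentation/truncation and variance-correction steps of FeDLRT are omitted.

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Linear layer with weights factorized as W ≈ U @ S @ V^T.

    Illustrative only: U and V stand in for the shared global basis
    (frozen during client training), while S is the small r x r
    coefficient matrix each client updates locally.
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Orthonormal initial bases via QR (illustrative initialization).
        self.U = nn.Parameter(
            torch.linalg.qr(torch.randn(out_features, rank))[0],
            requires_grad=False,
        )
        self.V = nn.Parameter(
            torch.linalg.qr(torch.randn(in_features, rank))[0],
            requires_grad=False,
        )
        # Trainable coefficient matrix: the only part a client optimizes here.
        self.S = nn.Parameter(torch.randn(rank, rank) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T with W = U S V^T, i.e. project with V, mix with S, lift with U.
        return x @ self.V @ self.S.T @ self.U.T


def aggregate_coefficients(client_S: list[torch.Tensor]) -> torch.Tensor:
    """Server-side averaging of the small coefficient matrices only
    (a FedAvg-style stand-in for the paper's aggregation step)."""
    return torch.stack(client_S).mean(dim=0)
```

Because only S (rank × rank) is trained and communicated instead of the full weight matrix (out_features × in_features), per-round client compute and upload costs scale with the rank rather than the layer size, which is the source of the cost reduction the abstract refers to.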
Cite
Text
Schotthöfer and Laiu. "Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees." NeurIPS 2024 Workshops: Federated_Learning, 2024.
Markdown
[Schotthöfer and Laiu. "Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees." NeurIPS 2024 Workshops: Federated_Learning, 2024.](https://mlanthology.org/neuripsw/2024/schotthofer2024neuripsw-federated/)
BibTeX
@inproceedings{schotthofer2024neuripsw-federated,
  title     = {{Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees}},
  author    = {Schotthöfer, Steffen and Laiu, M. Paul},
  booktitle = {NeurIPS 2024 Workshops: Federated_Learning},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/schotthofer2024neuripsw-federated/}
}