Distributed Quasi-Newton Method for Fair and Fast Federated Learning
Abstract
Federated learning (FL) is a promising technology that enables edge devices/clients to collaboratively and iteratively train a machine learning model under the coordination of a central server. The most common approach to FL is first-order methods, where clients send their local gradients to the server in each iteration. However, these methods often suffer from slow convergence rates. As a remedy, second-order methods, such as quasi-Newton, can be employed in FL to accelerate its convergence. Unfortunately, as with first-order FL methods, the application of second-order methods in FL can lead to unfair models, achieving high average accuracy while performing poorly on certain clients' local datasets. To tackle this issue, in this paper we introduce a novel second-order FL framework, dubbed distributed quasi-Newton federated learning (DQN-Fed). This approach seeks to ensure fairness while leveraging the fast convergence properties of quasi-Newton methods in the FL context. Specifically, DQN-Fed helps the server update the global model in such a way that (i) all local loss functions decrease to promote fairness, and (ii) the rate of change in local loss functions aligns with that of the quasi-Newton method. We prove the convergence of DQN-Fed and demonstrate its \textit{linear-quadratic} convergence rate. Moreover, we validate the efficacy of DQN-Fed across a range of federated datasets, showing that it surpasses state-of-the-art fair FL methods in fairness, average accuracy, and convergence speed. The code for the paper is publicly available at \url{https://anonymous.4open.science/r/DQN-Fed-FDD2}.
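To make conditions (i) and (ii) above concrete, here is a minimal, illustrative sketch of a server-side update that (i) is a first-order descent direction for every client's local loss and (ii) stays close to a given quasi-Newton direction. It assumes the server holds per-client gradients and some quasi-Newton direction; the function names and the cyclic-projection heuristic are assumptions for illustration only and are not the paper's actual DQN-Fed algorithm.

```python
import numpy as np

def server_update_direction(local_grads, qn_direction, n_proj_iters=50):
    """Illustrative server step reflecting the two conditions in the abstract:
    (i)  the returned direction d satisfies g_i . d >= 0 for every client i,
         so each local loss decreases to first order when w <- w - lr * d;
    (ii) d starts from (and stays close to) a given quasi-Newton direction.

    Uses cyclic projection onto the half-spaces {d : g_i . d >= 0} as a
    simple heuristic; this is NOT the paper's DQN-Fed update rule.
    """
    d = np.asarray(qn_direction, dtype=float).copy()
    for _ in range(n_proj_iters):
        violated = False
        for g in local_grads:
            dot = float(g @ d)
            if dot < 0.0:                 # d would increase this client's loss
                d -= dot / (g @ g) * g    # project onto {d : g . d >= 0}
                violated = True
        if not violated:                  # all local losses decrease (1st order)
            break
    return d

# Toy usage: two clients with conflicting gradients; the quasi-Newton
# direction is stood in for by the mean gradient (a placeholder only).
if __name__ == "__main__":
    g1 = np.array([1.0, 0.2])
    g2 = np.array([-2.0, 0.5])
    d_qn = 0.5 * (g1 + g2)
    d = server_update_direction([g1, g2], d_qn)
    print(g1 @ d >= -1e-9, g2 @ d >= -1e-9)   # both True: no local loss increases
```

In this toy example the averaged direction would increase client 1's loss, so one projection nudges it back into the common descent cone while leaving the other client's descent intact.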
Cite
Text

Hamidi and Ye. "Distributed Quasi-Newton Method for Fair and Fast Federated Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Hamidi and Ye. "Distributed Quasi-Newton Method for Fair and Fast Federated Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/hamidi2025tmlr-distributed/)

BibTeX
@article{hamidi2025tmlr-distributed,
title = {{Distributed Quasi-Newton Method for Fair and Fast Federated Learning}},
author = {Hamidi, Shayan Mohajer and Ye, Linfeng},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/hamidi2025tmlr-distributed/}
}