Efficient Adaptive Federated Optimization

Abstract

Adaptive optimization is critical in federated learning, where enabling adaptivity on both the server and client sides has proven essential for achieving optimal performance. However, the scalability of such jointly adaptive systems is often hindered by resource limitations in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named $FedAda^2$ and its enhanced version $FedAda^2$++, designed specifically for large-scale, cross-device federated environments. $FedAda^2$ optimizes communication efficiency by avoiding the transfer of preconditioners between the server and clients. $FedAda^2$++ extends this approach by additionally incorporating memory-efficient adaptive optimizers on the client side, further reducing on-device memory usage. Theoretically, we demonstrate that $FedAda^2$ and $FedAda^2$++ achieve the same convergence rates for general, non-convex objectives as their more resource-intensive counterparts that directly integrate joint adaptivity. Extensive empirical evaluations on image and text datasets confirm both the advantages of joint adaptivity and the effectiveness and efficiency of $FedAda^2$/$FedAda^2$++.
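
For intuition, the sketch below illustrates the communication pattern the abstract describes: the server keeps its preconditioner local, each client re-initializes its own preconditioner from zero at the start of every round, and only model deltas travel over the network. This is a minimal NumPy illustration under assumed Adagrad-style client updates and an Adam-style server second moment; the function names, update rules, and hyperparameters are hypothetical placeholders, not the paper's exact algorithm.

import numpy as np

def client_update(model, grad_fn, local_steps=10, client_lr=0.01, eps=1e-8):
    # Local Adagrad-style steps. The client preconditioner v is
    # (re-)initialized from zero every round, so no preconditioner
    # state is ever downloaded from (or uploaded to) the server.
    x = model.copy()
    v = np.zeros_like(x)
    for _ in range(local_steps):
        g = grad_fn(x)
        v += g * g
        x -= client_lr * g / (np.sqrt(v) + eps)
    return x - model  # only the model delta is communicated

def server_update(model, v_server, deltas, server_lr=0.5, beta2=0.99, eps=1e-3):
    # Server-side adaptive step on the averaged pseudo-gradient.
    # v_server lives on the server only and is never transmitted.
    d = np.mean(deltas, axis=0)
    v_server = beta2 * v_server + (1 - beta2) * d * d
    model = model + server_lr * d / (np.sqrt(v_server) + eps)
    return model, v_server

# Toy usage on per-client quadratic objectives (synthetic data,
# for illustration only).
rng = np.random.default_rng(0)
model = rng.normal(size=5)
v_server = np.zeros_like(model)
targets = [rng.normal(size=5) for _ in range(4)]
for _ in range(100):
    deltas = [client_update(model, lambda x, t=t: x - t) for t in targets]
    model, v_server = server_update(model, v_server, deltas)

Because the client accumulator v above is dense, $FedAda^2$++ would further replace it with a memory-efficient adaptive optimizer (e.g., an SM3- or Adafactor-style factored accumulator) to cut on-device memory; that substitution is orthogonal to the communication pattern shown here.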

Cite

Text

Lee et al. "Efficient Adaptive Federated Optimization." Advances in Neural Information Processing Systems, 2025.

Markdown

[Lee et al. "Efficient Adaptive Federated Optimization." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lee2025neurips-efficient/)

BibTeX

@inproceedings{lee2025neurips-efficient,
  title     = {{Efficient Adaptive Federated Optimization}},
  author    = {Lee, Su Hyeong and Sharma, Sidharth and Zaheer, Manzil and Li, Tian},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/lee2025neurips-efficient/}
}