Fast Decentralized Gradient Tracking for Federated Learning with Local Updates: From Mini to Minimax Optimization

Abstract

Federated learning (FL) for minimization ("mini") and minimax optimization has emerged as a powerful paradigm for training models across distributed nodes/clients while preserving data privacy and maintaining model robustness under data heterogeneity. In this work, we study the decentralized implementation of federated minimax optimization by proposing \texttt{K-GT-Minimax}, a novel decentralized minimax optimization algorithm that combines local updates with gradient tracking. Our analysis establishes the algorithm's communication efficiency and convergence rate for nonconvex-strongly-concave (NC-SC) minimax optimization, demonstrating an improved rate over existing methods. \texttt{K-GT-Minimax}'s ability to handle data heterogeneity and ensure robustness underscores its significance in advancing federated learning research and applications.
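
The abstract gives no pseudocode, so the following Python sketch only illustrates the general template it describes: K local gradient descent-ascent steps using tracking-corrected gradients, followed by one gossip round that mixes models and updates the corrections. It is applied to a toy convex-strongly-concave quadratic rather than the NC-SC setting the paper analyzes, and every name, step size, and update rule below is an illustrative assumption, not the paper's exact \texttt{K-GT-Minimax} updates.

import numpy as np

rng = np.random.default_rng(0)
n, K, T = 8, 5, 400        # clients, local steps per round, communication rounds
eta = 0.05                 # local step size

# Heterogeneous toy objectives, strongly concave in y (an assumption):
#   f_i(x, y) = 0.5 * a_i * x^2 + b_i * x * y - 0.5 * y^2
a = rng.uniform(0.5, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)

# Symmetric, doubly stochastic ring mixing (gossip) matrix W.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = rng.normal(size=n)     # each client's copy of the min variable
y = rng.normal(size=n)     # each client's copy of the max variable
cx = np.zeros(n)           # tracking corrections; their sum over clients stays 0
cy = np.zeros(n)

for t in range(T):
    u, v = x.copy(), y.copy()
    # K local steps: corrected descent in x, corrected ascent in y.
    for _ in range(K):
        gx = a * u + b * v          # d f_i / dx on each client
        gy = b * u - v              # d f_i / dy on each client
        u -= eta * (gx + cx)
        v += eta * (gy + cy)
    dx, dy = u - x, v - y           # accumulated local displacements
    # Update trackers so (local gradient + correction) approaches the
    # network-average gradient; the sign flips between descent and ascent.
    cx += (dx - W @ dx) / (K * eta)
    cy += (W @ dy - dy) / (K * eta)
    # One gossip round on the locally updated models.
    x, y = W @ u, W @ v

print("max |x_i|:", np.abs(x).max(), " max |y_i|:", np.abs(y).max())  # near 0

On this toy problem the average objective has its unique saddle point at (0, 0), so all clients' copies of x and y should shrink toward zero. Because the corrections are initialized to zero (and hence sum to zero across clients), each client's corrected gradient drifts toward the network-average gradient, which is what keeps the K local steps accurate despite the heterogeneous a_i and b_i.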

Cite

Text

Li. "Fast Decentralized Gradient Tracking for Federated Learning with Local Updates: From Mini to Minimax Optimization." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[Li. "Fast Decentralized Gradient Tracking for Federated Learning with Local Updates: From Mini to Minimax Optimization." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/li2024neuripsw-fast/)

BibTeX

@inproceedings{li2024neuripsw-fast,
  title     = {{Fast Decentralized Gradient Tracking for Federated Learning with Local Updates: From Mini to Minimax Optimization}},
  author    = {Li, Chris Junchi},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/li2024neuripsw-fast/}
}