Federated Frank-Wolfe Algorithm
Abstract
Federated learning (FL) has gained much attention in recent years for building privacy-preserving collaborative learning systems. However, FL algorithms for constrained machine learning problems are still limited, particularly when the projection step is costly. To this end, we propose a Federated Frank-Wolfe Algorithm (FedFW). FedFW provably finds an $\varepsilon$-suboptimal solution of the constrained empirical risk-minimization problem after $\mathcal{O}(\varepsilon^{-2})$ iterations if the objective function is convex. The rate becomes $\mathcal{O}(\varepsilon^{-3})$ if the objective is non-convex. The method enjoys data privacy, a low per-iteration cost, and communication of sparse signals. We demonstrate the empirical performance of FedFW on several machine learning tasks.
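The abstract emphasizes that Frank-Wolfe methods avoid costly projections by using a linear minimization oracle over the constraint set. The sketch below illustrates that projection-free mechanism with a classical, centralized Frank-Wolfe loop on an $\ell_1$-ball constraint; it is not the paper's FedFW algorithm, and the function and parameter names (`lmo_l1_ball`, `frank_wolfe`, `radius`) are illustrative assumptions only.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s>."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_fn, x0, radius=1.0, num_iters=100):
    """Classical (centralized) Frank-Wolfe: each iteration calls a
    linear minimization oracle instead of a projection."""
    x = x0.copy()
    for t in range(num_iters):
        g = grad_fn(x)
        s = lmo_l1_ball(g, radius)          # projection-free direction
        gamma = 2.0 / (t + 2.0)             # standard FW step size
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Toy usage: least squares restricted to an l1 ball (synthetic data).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
grad_fn = lambda x: A.T @ (A @ x - b)
x_hat = frank_wolfe(grad_fn, np.zeros(10), radius=5.0, num_iters=200)
print(np.linalg.norm(x_hat, 1))  # remains within the l1 constraint
```

In the federated setting described by the paper, clients would compute such update directions locally and communicate sparse signals to a server; the exact aggregation scheme is specified in the paper and is not reproduced here.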
Cite
Text
Dadras et al. "Federated Frank-Wolfe Algorithm." NeurIPS 2022 Workshops: Federated_Learning, 2022.
Markdown
[Dadras et al. "Federated Frank-Wolfe Algorithm." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/dadras2022neuripsw-federated/)
BibTeX
@inproceedings{dadras2022neuripsw-federated,
  title     = {{Federated Frank-Wolfe Algorithm}},
  author    = {Dadras, Ali and Prakhya, Karthik and Yurtsever, Alp},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/dadras2022neuripsw-federated/}
}