Federated Multi-Objective Learning

Abstract

In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications. However, existing algorithms in the MOO literature remain limited to centralized learning settings, which cannot accommodate the distributed nature and data privacy needs of such multi-agent multi-task learning applications. This motivates us to propose a new federated multi-objective learning (FMOL) framework, in which multiple clients distributively and collaboratively solve an MOO problem while keeping their training data private. Notably, our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications, which advances and generalizes the MOO formulation to the federated learning paradigm for the first time. For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms called federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates to significantly reduce communication costs, while achieving the *same* convergence rates as those of their algorithmic counterparts in single-objective federated learning. Our extensive experiments also corroborate the efficacy of the proposed FMOO algorithms.
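To make the algorithmic idea concrete, below is a minimal Python sketch of one FMGDA-style round, not the authors' implementation. It assumes each client computes one gradient per local objective, combines them into a common descent direction by approximately solving the classic MGDA min-norm problem (here via Frank-Wolfe), takes several local steps, and the server averages the locally updated models, FedAvg-style. All function and variable names (min_norm_direction, local_update, server_round, the toy quadratic objectives) are illustrative.

import numpy as np

def min_norm_direction(grads, n_iter=100):
    """Approximate the minimum-norm element of the convex hull of the
    objective gradients (the MGDA common descent direction) via Frank-Wolfe."""
    m = len(grads)
    G = np.stack(grads)               # (num_objectives, dim)
    GG = G @ G.T                      # Gram matrix of pairwise inner products
    alpha = np.full(m, 1.0 / m)       # start from the uniform mixture
    for t in range(n_iter):
        i = int(np.argmin(GG @ alpha))  # vertex minimizing the linearization
        step = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        e = np.zeros(m)
        e[i] = 1.0
        alpha = (1 - step) * alpha + step * e
    return alpha @ G                  # convex combination of the gradients

def local_update(x, objective_grads, lr=0.1, local_steps=5):
    """One client's round: several local multi-gradient descent steps."""
    for _ in range(local_steps):
        grads = [g(x) for g in objective_grads]  # one gradient per objective
        x = x - lr * min_norm_direction(grads)
    return x

def server_round(x_global, clients, lr=0.1, local_steps=5):
    """Server averages the clients' locally updated models."""
    updates = [local_update(x_global.copy(), c, lr, local_steps) for c in clients]
    return np.mean(updates, axis=0)

# Toy example: two clients, each holding two quadratic objectives; note the
# objective sets differ across clients, as the FMOL framework allows.
client_a = [lambda x: 2 * (x - 1.0), lambda x: 2 * (x + 1.0)]
client_b = [lambda x: 2 * (x - 0.5), lambda x: 2 * (x + 2.0)]
x = np.array([5.0])
for _ in range(20):
    x = server_round(x, [client_a, client_b])
print("Approximately Pareto-stationary point:", x)

FSMGDA would follow the same template with stochastic (mini-batch) gradients in place of the exact ones; the local steps are what cut communication relative to sending a gradient per objective every iteration.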

Cite

Text

Yang et al. "Federated Multi-Objective Learning." Neural Information Processing Systems, 2023.

Markdown

[Yang et al. "Federated Multi-Objective Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/yang2023neurips-federated/)

BibTeX

@inproceedings{yang2023neurips-federated,
  title     = {{Federated Multi-Objective Learning}},
  author    = {Yang, Haibo and Liu, Zhuqing and Liu, Jia and Dong, Chaosheng and Momma, Michinari},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/yang2023neurips-federated/}
}