Federated Sampling with Langevin Algorithm Under Isoperimetry

Abstract

Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices that own the training data. These techniques critically rely on reducing the communication cost, which is the main bottleneck, between the devices and a central server. Federated learning algorithms usually take an optimization approach: they are algorithms for minimizing the training loss subject to communication (and other) constraints. In this work, we instead take a Bayesian approach for the training task and propose a communication-efficient variant of the Langevin algorithm to sample *a posteriori*. The latter approach is more robust and provides more knowledge of the *a posteriori* distribution than its optimization counterpart. We analyze our algorithm without assuming that the target distribution is strongly log-concave. Instead, we assume the weaker log-Sobolev inequality, which allows for nonconvexity.
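
For intuition, the core primitive behind the paper is posterior sampling with the (unadjusted) Langevin algorithm, whose iterate adds Gaussian noise to a gradient step on the negative log-posterior. The sketch below is only a minimal illustration of that idea in a federated setting where a server sums per-device gradients at each step; the step size gamma, the quadratic local potentials, and the uncompressed gradient exchange are assumptions of this example, not the communication-efficient scheme analyzed in the paper.

import numpy as np

# Minimal sketch (not the paper's algorithm): unadjusted Langevin algorithm
# where the negative log-posterior F(x) = sum_i F_i(x) is split across devices.
# The toy local potentials F_i(x) = 0.5 * ||x - mu_i||^2 are an assumption here.
rng = np.random.default_rng(0)
mus = [rng.normal(size=2) for _ in range(4)]   # one "device" per mu_i

def local_grad(x, mu):
    # Gradient of the local potential F_i, computed on device i.
    return x - mu

def federated_langevin(num_iters=2000, gamma=0.05):
    x = np.zeros(2)
    samples = []
    for _ in range(num_iters):
        # Each device sends its gradient at the current iterate; a
        # communication-efficient variant would compress these messages.
        grad = sum(local_grad(x, mu) for mu in mus)
        # Langevin step: gradient descent on F plus injected Gaussian noise.
        x = x - gamma * grad + np.sqrt(2.0 * gamma) * rng.normal(size=2)
        samples.append(x.copy())
    return np.array(samples)

samples = federated_langevin()
print("posterior mean estimate:", samples[1000:].mean(axis=0))

For this toy target the samples concentrate around the average of the mu_i, since the sum of the quadratic potentials is again a quadratic centered at that average.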

Cite

Text

Sun et al. "Federated Sampling with Langevin Algorithm Under Isoperimetry." Transactions on Machine Learning Research, 2024.

Markdown

[Sun et al. "Federated Sampling with Langevin Algorithm Under Isoperimetry." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/sun2024tmlr-federated/)

BibTeX

@article{sun2024tmlr-federated,
  title     = {{Federated Sampling with Langevin Algorithm Under Isoperimetry}},
  author    = {Sun, Lukang and Salim, Adil and Richtárik, Peter},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/sun2024tmlr-federated/}
}