Group Privacy for Personalized Federated Learning

Abstract

Federated learning exposes the participating clients to two issues: the leakage of private information through the client-server communication and the lack of personalization of the global model. To address both problems, we investigate the use of metric-based local privacy mechanisms and model personalization. These rely on operations performed directly in the parameter space, i.e. sanitization of the model parameters by the clients and clustering of the model parameters by the server.
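To make the two parameter-space operations concrete, here is a minimal illustrative sketch, not the authors' implementation: each client perturbs its flattened parameter vector with noise calibrated to the Euclidean metric (a standard multivariate Laplace-style mechanism for metric privacy), and the server groups clients by clustering the sanitized vectors. The specific noise distribution, the use of k-means, and all names below are assumptions for illustration.

```python
# Illustrative sketch only (assumed mechanism, not the paper's exact method):
# clients sanitize parameters with metric-calibrated noise; the server clusters
# the sanitized parameter vectors to form per-group personalized models.
import numpy as np
from sklearn.cluster import KMeans


def sanitize_parameters(params: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Add multivariate-Laplace-style noise in the parameter space.

    Direction is uniform on the unit sphere and magnitude ~ Gamma(d, 1/epsilon),
    a common construction for Euclidean metric privacy (illustrative assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = params.size
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=d, scale=1.0 / epsilon)
    return params + magnitude * direction


def cluster_clients(sanitized_params: np.ndarray, n_groups: int) -> np.ndarray:
    """Server-side step: group clients by clustering their sanitized parameters."""
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(sanitized_params)


# Toy usage: 8 clients, 10-dimensional flattened models, 2 personalization groups.
client_params = np.random.randn(8, 10)
noisy = np.stack([sanitize_parameters(p, epsilon=1.0) for p in client_params])
group_ids = cluster_clients(noisy, n_groups=2)
print(group_ids)
```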

Cite

Text

Galli et al. "Group Privacy for Personalized Federated Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Galli et al. "Group Privacy for Personalized Federated Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/galli2022neuripsw-group/)

BibTeX

@inproceedings{galli2022neuripsw-group,
  title     = {{Group Privacy for Personalized Federated Learning}},
  author    = {Galli, Filippo and Biswas, Sayan and Jung, Kangsoo and Cucinotta, Tommaso and Palamidessi, Catuscia},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/galli2022neuripsw-group/}
}