Federated Continual Learning with Differentially Private Data Sharing

Abstract

In Federated Learning (FL), many types of skew can occur, including uneven class distributions and varying client participation. In addition, new tasks and data modalities can be encountered over time, which leads to the problem domain of Federated Continual Learning (FCL). In this work we study how to adapt some of the simplest, yet often most effective, replay-based Continual Learning approaches to FL. We focus on temporal shifts in client behaviour and show that directly applying replay methods leads to poor results. To address these shortcomings, we explore data sharing between clients under differential privacy. This alleviates the weaknesses of current baselines, yielding performance gains in a wide range of cases, with our method achieving maximum gains of 49%.
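
The core idea described in the abstract — clients publish a small, differentially private subset of their data, which other clients mix into local replay buffers as client behaviour shifts over time — can be illustrated with a minimal sketch. The helper below is hypothetical and assumes per-sample L2 clipping followed by the standard Gaussian mechanism; the function name, clipping norm, and (epsilon, delta) values are illustrative choices, not the exact protocol from the paper.

```python
import numpy as np

def privatize_samples(x, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip each sample to an L2 ball and add Gaussian-mechanism noise.

    Hypothetical helper sketching DP data sharing for replay buffers;
    the calibration is the textbook Gaussian mechanism, not necessarily
    the paper's exact scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float64)
    flat = x.reshape(len(x), -1)

    # Clip each sample so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    clipped = flat * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Replace-one adjacency: releasing one clipped sample has L2 sensitivity 2 * clip_norm.
    sensitivity = 2.0 * clip_norm
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

    noised = clipped + rng.normal(0.0, sigma, size=clipped.shape)
    return noised.reshape(x.shape)

# Each client publishes a small noised subset; other clients mix these
# samples into their replay buffers when training on later tasks.
shared = privatize_samples(np.random.rand(32, 28 * 28), epsilon=2.0)
```

In this sketch the privacy cost is paid once at sharing time, so the noised samples can be reused across rounds and tasks without further accounting; how much noise is tolerable for replay to remain useful is exactly the trade-off the paper evaluates.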

Cite

Text

Zizzo et al. "Federated Continual Learning with Differentially Private Data Sharing." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Zizzo et al. "Federated Continual Learning with Differentially Private Data Sharing." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/zizzo2022neuripsw-federated/)

BibTeX

@inproceedings{zizzo2022neuripsw-federated,
  title     = {{Federated Continual Learning with Differentially Private Data Sharing}},
  author    = {Zizzo, Giulio and Rawat, Ambrish and Holohan, Naoise and Tirupathi, Seshu},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/zizzo2022neuripsw-federated/}
}