AIMHI: Protecting Sensitive Data Through Federated Co-Training

Abstract

Federated learning enables distributed sites to train a model collaboratively without sharing their sensitive local data; instead, the sites share model parameters. It is possible, though, to make non-trivial inferences about sensitive local data from these model parameters. We propose a novel co-training technique called AIMHI that exchanges information between sites by sharing predictions on a public unlabeled dataset. This setting is particularly suitable for healthcare, where hospitals and clinics hold small labeled datasets of highly sensitive patient data, while large national health databases contain large amounts of public patient data. We show that the proposed method reaches a model quality comparable to federated learning while maintaining privacy to a high degree.
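The co-training idea from the abstract can be sketched in a few lines: each site trains on its private labeled data, all sites predict on a shared public unlabeled pool, and a consensus over those predictions (here, a simple majority vote) is fed back as pseudo-labels. This is a minimal illustrative sketch, not the paper's actual algorithm; the toy data, the nearest-centroid local model, and the majority-vote consensus rule are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 "sites" (e.g., clinics) each hold a small
# private labeled sample; one public unlabeled pool is visible to all.
def make_data(n, rng):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple linear concept
    return X, y

sites = [make_data(20, rng) for _ in range(3)]
X_pub, y_pub_true = make_data(200, rng)  # true labels used only for evaluation

class NearestCentroid:
    """Minimal stand-in for each site's private local learner."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

# Federated co-training loop: only predictions on the public pool leave a site.
pseudo = None
for _ in range(5):
    votes = []
    for X_loc, y_loc in sites:
        if pseudo is None:
            X_tr, y_tr = X_loc, y_loc
        else:  # augment local data with the consensus pseudo-labels
            X_tr = np.vstack([X_loc, X_pub])
            y_tr = np.concatenate([y_loc, pseudo])
        votes.append(NearestCentroid().fit(X_tr, y_tr).predict(X_pub))
    # Consensus by majority vote -- the only information shared between sites.
    pseudo = (np.stack(votes).mean(axis=0) >= 0.5).astype(int)

accuracy = (pseudo == y_pub_true).mean()
print(f"consensus accuracy on public pool: {accuracy:.2f}")
```

Note that, unlike parameter averaging in standard federated learning, no model weights ever leave a site here; only hard labels on already-public inputs are exchanged, which is the privacy argument the abstract makes.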

Cite

Text

Abourayya et al. "AIMHI: Protecting Sensitive Data Through Federated Co-Training." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Abourayya et al. "AIMHI: Protecting Sensitive Data Through Federated Co-Training." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/abourayya2022neuripsw-aimhi/)

BibTeX

@inproceedings{abourayya2022neuripsw-aimhi,
  title     = {{AIMHI: Protecting Sensitive Data Through Federated Co-Training}},
  author    = {Abourayya, Amr and Kamp, Michael and Ayday, Erman and Kleesiek, Jens and Rao, Kanishka and Webb, Geoffrey I. and Rao, Bharat},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/abourayya2022neuripsw-aimhi/}
}