Beyond Secure Aggregation: Scalable Multi-Round Secure Collaborative Learning
Abstract
Privacy-preserving machine learning (PPML) has achieved exciting breakthroughs for secure collaborative training of machine learning models under formal information-theoretic privacy guarantees. Despite these advances, the communication bottleneck remains a major challenge for scaling to large neural networks. To address this challenge, in this work we introduce the first end-to-end multi-round multi-party neural network training framework with linear communication complexity, under formal information-theoretic privacy guarantees. Our key contribution is a scalable secure computing mechanism for iterative polynomial operations, which incurs only linear communication overhead, significantly improving over the quadratic state-of-the-art, while providing formal end-to-end multi-round information-theoretic privacy guarantees. In doing so, our framework matches the state-of-the-art in adversary tolerance, resilience to user dropouts, and model accuracy, while addressing a key challenge in scalable training.
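To give context for the secure aggregation baseline the title refers to, below is a minimal illustrative sketch (not the paper's mechanism) of pairwise-masked secure aggregation over a finite field: each user adds masks that cancel in the sum, so the server learns only the aggregate. All names and the modulus choice are assumptions for illustration.

```python
import random

PRIME = 2**31 - 1  # illustrative finite-field modulus

def pairwise_masks(num_users, dim, seed=0):
    """Generate cancelling pairwise masks: user i adds m_ij, user j subtracts it."""
    rng = random.Random(seed)
    masks = [[0] * dim for _ in range(num_users)]
    for i in range(num_users):
        for j in range(i + 1, num_users):
            m = [rng.randrange(PRIME) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] = (masks[i][k] + m[k]) % PRIME
                masks[j][k] = (masks[j][k] - m[k]) % PRIME
    return masks

def aggregate(updates, masks):
    """Server sums masked updates; the pairwise masks cancel, revealing only the sum."""
    num_users, dim = len(updates), len(updates[0])
    masked = [
        [(updates[i][k] + masks[i][k]) % PRIME for k in range(dim)]
        for i in range(num_users)
    ]
    total = [0] * dim
    for row in masked:
        for k in range(dim):
            total[k] = (total[k] + row[k]) % PRIME
    return total
```

Note that pairwise masking alone handles only a single summation round; extending such guarantees to the iterative (multi-round, nonlinear) computations of full training, at linear communication cost, is exactly the gap this paper targets.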
Cite
Text
Basaran et al. "Beyond Secure Aggregation: Scalable Multi-Round Secure Collaborative Learning." ICML 2023 Workshops: FL, 2023.
Markdown
[Basaran et al. "Beyond Secure Aggregation: Scalable Multi-Round Secure Collaborative Learning." ICML 2023 Workshops: FL, 2023.](https://mlanthology.org/icmlw/2023/basaran2023icmlw-beyond/)
BibTeX
@inproceedings{basaran2023icmlw-beyond,
title = {{Beyond Secure Aggregation: Scalable Multi-Round Secure Collaborative Learning}},
author = {Basaran, Umit Yigit and Lu, Xingyu and Guler, Basak},
booktitle = {ICML 2023 Workshops: FL},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/basaran2023icmlw-beyond/}
}