Revisiting Differentially Private Algorithms for Decentralized Online Learning
Abstract
Although the differential privacy (DP) of decentralized online learning has garnered considerable attention recently, existing algorithms are unsatisfactory: they cannot achieve $(\epsilon, 0)$-DP over all $T$ rounds, recover the optimal regret of the non-private case, or maintain lightweight computation under complex constraints. To address these issues, we first propose a new decentralized online learning algorithm satisfying $(\epsilon, 0)$-DP over $T$ rounds, and show that it achieves $\widetilde{O}(n(\rho^{-1/4}+\epsilon^{-1}\rho^{1/4})\sqrt{T})$ and $\widetilde{O}(n (\rho^{-1/2}+\epsilon^{-1}))$ regret bounds for convex and strongly convex functions, respectively, where $n$ is the number of local learners and $\rho$ is the spectral gap of the communication matrix. As long as $\epsilon=\Omega(\sqrt{\rho})$, these bounds nearly match the existing lower bounds in the non-private case, implying that $(\epsilon, 0)$-DP for decentralized online learning can be ensured nearly for free. Our key idea is a block-decoupled accelerated gossip strategy that can be combined with the classical tree-based private aggregation and also achieves faster average consensus among local learners. Furthermore, we develop a projection-free variant of our algorithm to preserve efficiency under complex constraints. As a trade-off, the above regret bounds degrade to $\widetilde{O}(n(T^{3/4}+\epsilon^{-1}T^{1/4}))$ and $\widetilde{O}(n(T^{2/3}+\epsilon^{-1}))$, respectively, which are nonetheless better than those of the existing private centralized projection-free online algorithm.
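The abstract refers to the classical tree-based private aggregation mechanism. As background, the following is a minimal sketch of that standard technique (not the paper's algorithm): to release all prefix sums of a length-$T$ stream privately, each dyadic interval's partial sum is perturbed once with Laplace noise, and every prefix sum is assembled from $O(\log T)$ noisy nodes, so the noise per release stays polylogarithmic in $T$. The function name and structure here are illustrative.

```python
import math
import random

def tree_aggregate(stream, epsilon):
    """Differentially private prefix sums via the classical binary-tree
    (counter) mechanism. Each element of `stream` contributes to at most
    `levels` dyadic nodes, so Laplace noise of scale levels/epsilon per
    node suffices for epsilon-DP of the whole released sequence
    (assuming per-element sensitivity 1)."""
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / epsilon  # each element touches at most `levels` nodes
    noisy = {}  # (level, index) -> noisy partial sum, sampled once

    def node_sum(level, index):
        # Noisy sum over the dyadic interval [index * 2^level, (index+1) * 2^level).
        if (level, index) not in noisy:
            lo = index * 2 ** level
            hi = min(lo + 2 ** level, T)
            true_sum = sum(stream[lo:hi])
            # Laplace(scale) noise as a difference of two exponentials.
            noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
            noisy[(level, index)] = true_sum + noise
        return noisy[(level, index)]

    prefix = []
    for t in range(1, T + 1):
        # Greedily decompose [0, t) into disjoint dyadic intervals,
        # largest first, and add their noisy partial sums.
        s, pos = 0.0, 0
        for level in range(levels, -1, -1):
            if pos + 2 ** level <= t:
                s += node_sum(level, pos // 2 ** level)
                pos += 2 ** level
        prefix.append(s)
    return prefix
```

Because each noisy node is sampled once and reused, releasing all $T$ prefix sums costs only $O(\log T)$ noise terms per release, which is the property the paper's block-decoupled gossip strategy is designed to be compatible with.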
Cite
Wang et al. "Revisiting Differentially Private Algorithms for Decentralized Online Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.
BibTeX
@inproceedings{wang2025icml-revisiting,
title = {{Revisiting Differentially Private Algorithms for Decentralized Online Learning}},
author = {Wang, Xiaoyu and Yang, Wenhao and Yao, Chang and Song, Mingli and Wan, Yuanyu},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {65213--65235},
volume = {267},
url = {https://mlanthology.org/icml/2025/wang2025icml-revisiting/}
}