On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
Abstract
In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\epsilon,\delta)$-DP online algorithms, for a number of rounds $T$ such that $\log T \leq O(1/\delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log T)$. This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed by Sanyal and Ramponi (2022).
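For readability, the bound stated in the abstract can be written compactly as follows; here $M_{\mathcal{A}}(T)$ is shorthand (our notation, not the paper's) for the number of mistakes made by an $(\epsilon,\delta)$-DP online algorithm $\mathcal{A}$ over $T$ rounds:
\[
\mathbb{E}\big[M_{\mathcal{A}}(T)\big] \;=\; \Omega(\log T) \qquad \text{whenever } \log T \leq O\!\left(\tfrac{1}{\delta}\right),
\]
matching the $O(\log T)$ upper bound of Golowich and Livni (2021).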
Cite
Text
Dmitriev et al. "On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective." Conference on Learning Theory, 2024.
Markdown
[Dmitriev et al. "On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective." Conference on Learning Theory, 2024.](https://mlanthology.org/colt/2024/dmitriev2024colt-growth/)
BibTeX
@inproceedings{dmitriev2024colt-growth,
title = {{On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective}},
author = {Dmitriev, Daniil and Szabó, Kristóf and Sanyal, Amartya},
booktitle = {Conference on Learning Theory},
year = {2024},
pages = {1379--1398},
volume = {247},
url = {https://mlanthology.org/colt/2024/dmitriev2024colt-growth/}
}