Differentially Private Heavy Hitters Using Federated Analytics
Abstract
We study practical heuristics to improve the performance of prefix-tree-based algorithms for differentially private heavy-hitter detection. In our model, each user holds multiple data points, and the goal is to learn as many of the most frequent data points as possible across all users' data under both aggregate and local differential privacy. We propose an adaptive hyperparameter tuning algorithm that improves performance while satisfying computational, communication, and aggregate privacy constraints. We also explore the impact of different data-selection schemes and of introducing deny lists across multiple runs of the algorithm. We evaluate these improvements through extensive experiments on the Reddit dataset on the task of learning the most frequent words.
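To make the prefix-tree idea concrete, here is a minimal Python sketch of interactive prefix-extension heavy-hitter discovery. It is an illustrative assumption, not the paper's protocol: the function names, the end-of-word marker, the Laplace-style noise, and the `noise_scale`/`threshold` parameters are all hypothetical, and no rigorous privacy accounting is performed.

```python
import collections
import random

# Illustrative sketch only: surviving prefixes are extended one character per round,
# and a candidate survives when its noisy vote count clears a threshold.

def extend_prefixes(items, live_prefixes, depth, noise_scale, threshold, rng):
    """Extend length-`depth` surviving prefixes by one character using noisy counts."""
    votes = collections.Counter()
    for item in items:
        if len(item) > depth and item[:depth] in live_prefixes:
            votes[item[:depth + 1]] += 1
    survivors = set()
    for candidate, count in votes.items():
        # Difference of two exponentials ~ Laplace noise, standing in for the DP mechanism.
        noisy = count + rng.expovariate(1.0 / noise_scale) - rng.expovariate(1.0 / noise_scale)
        if noisy >= threshold:
            survivors.add(candidate)
    return survivors

def heavy_hitters(user_items, max_len=12, noise_scale=2.0, threshold=5.0, seed=0):
    """Run prefix-extension rounds and return the discovered frequent items."""
    rng = random.Random(seed)
    items = [w + "$" for w in user_items]  # '$' marks the end of a complete item
    live, found = {""}, set()
    for depth in range(max_len + 1):
        live = extend_prefixes(items, live, depth, noise_scale, threshold, rng)
        found |= {p[:-1] for p in live if p.endswith("$")}
        live = {p for p in live if not p.endswith("$")}
        if not live:
            break
    return found

if __name__ == "__main__":
    data = ["the"] * 40 + ["privacy"] * 25 + ["federated"] * 20 + ["rare"] * 2
    print(heavy_hitters(data))  # the three frequent words are likely recovered; "rare" likely is not
```

In the setting described in the abstract, the votes would instead come from users holding multiple data points via a privacy-preserving aggregation step, and quantities such as the per-round threshold and noise scale stand in for the kind of hyperparameters that adaptive tuning would target.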
Cite
Text
Chadha et al. "Differentially Private Heavy Hitters Using Federated Analytics." ICML 2023 Workshops: FL, 2023.
Markdown
[Chadha et al. "Differentially Private Heavy Hitters Using Federated Analytics." ICML 2023 Workshops: FL, 2023.](https://mlanthology.org/icmlw/2023/chadha2023icmlw-differentially/)
BibTeX
@inproceedings{chadha2023icmlw-differentially,
  title = {{Differentially Private Heavy Hitters Using Federated Analytics}},
  author = {Chadha, Karan and Chen, Junye and Duchi, John and Feldman, Vitaly and Hashemi, Hanieh and Javidbakht, Omid and McMillan, Audra and Talwar, Kunal},
  booktitle = {ICML 2023 Workshops: FL},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/chadha2023icmlw-differentially/}
}