On the Robustness of CountSketch to Adaptive Inputs
Abstract
The last decade has seen impressive progress toward understanding the performance of algorithms in adaptive settings, where subsequent inputs may depend on the outputs for prior inputs. Adaptive settings arise in processes with feedback or with adversarial attacks. Existing designs of robust algorithms are generic wrappers of non-robust counterparts and leave open the possibility of better tailored designs. The lower bounds (attacks) are similarly worst-case, and their significance to practical settings is unclear. Aiming to understand these questions, we study the robustness of \texttt{CountSketch}, a popular dimensionality reduction technique that maps vectors to a lower dimension using randomized linear measurements. The sketch supports recovering $\ell_2$-heavy hitters of a vector (entries with $\boldsymbol{v}[i]^2 \geq \frac{1}{k}\|\boldsymbol{v}\|_2^2$). We show that the classic estimator is not robust and can be attacked with a number of queries on the order of the sketch size. We propose a robust estimator (for a slightly modified sketch) that allows for a quadratic number of queries in the sketch size, an improvement by a factor of $\sqrt{k}$ (for $k$ heavy hitters) over prior "blackbox" approaches.
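For context, the sketch below is a minimal illustration of the standard CountSketch data structure and its classic (non-robust) median estimator that the abstract refers to. The class name and parameters (`d` dimensions, `r` repetitions, `b` buckets) are illustrative choices, not the paper's modified robust construction.

```python
import numpy as np

class CountSketch:
    """Minimal CountSketch: a d-dimensional vector compressed into r rows of b buckets."""

    def __init__(self, d, r, b, seed=0):
        rng = np.random.default_rng(seed)
        # Per row j: a hash bucket h_j(i) in [b] and a random sign s_j(i) in {-1,+1}
        # for every coordinate i.
        self.bucket = rng.integers(0, b, size=(r, d))
        self.sign = rng.choice([-1, 1], size=(r, d))
        self.table = np.zeros((r, b))

    def update(self, i, delta=1.0):
        # Linear measurement: add delta to coordinate i of the underlying vector.
        for j in range(self.table.shape[0]):
            self.table[j, self.bucket[j, i]] += self.sign[j, i] * delta

    def estimate(self, i):
        # Classic estimator: median over rows of the signed bucket counters.
        r = self.table.shape[0]
        return float(np.median([self.sign[j, i] * self.table[j, self.bucket[j, i]]
                                for j in range(r)]))


# Example use: one heavy coordinate is recovered accurately with high probability.
d = 1000
cs = CountSketch(d=d, r=7, b=50)
v = np.zeros(d)
v[3] = 100.0      # heavy hitter
v[10:20] = 1.0    # light coordinates
for i in np.nonzero(v)[0]:
    cs.update(int(i), v[int(i)])
print(cs.estimate(3))  # close to 100
```

With non-adaptive inputs, the median over independent rows concentrates around the true entry; the paper's attack exploits query feedback to break exactly this estimator, which is why the proposed robust variant replaces it.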
Cite
Text
Cohen et al. "On the Robustness of CountSketch to Adaptive Inputs." International Conference on Machine Learning, 2022.
Markdown
[Cohen et al. "On the Robustness of CountSketch to Adaptive Inputs." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/cohen2022icml-robustness/)
BibTeX
@inproceedings{cohen2022icml-robustness,
title = {{On the Robustness of CountSketch to Adaptive Inputs}},
author = {Cohen, Edith and Lyu, Xin and Nelson, Jelani and Sarlos, Tamas and Shechner, Moshe and Stemmer, Uri},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {4112--4140},
volume = {162},
url = {https://mlanthology.org/icml/2022/cohen2022icml-robustness/}
}