Replicable Clustering
Abstract
We design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability from Impagliazzo et al. [2022]. According to this definition, a clustering algorithm is replicable if, with high probability, its output induces the exact same partition of the sample space after two executions on different inputs drawn from the same distribution, when its internal randomness is shared across the executions. We propose such algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their combinatorial counterparts in a black-box manner. In particular, we demonstrate a replicable $O(1)$-approximation algorithm for statistical Euclidean $k$-medians ($k$-means) with $\operatorname{poly}(d)$ sample complexity. We also describe an $O(1)$-approximation algorithm with an additional $O(1)$ additive error for statistical Euclidean $k$-centers, albeit with $\exp(d)$ sample complexity. In addition, we provide experiments on synthetic distributions in 2D, using the $k$-means++ implementation from sklearn as a black box, that validate our theoretical results.
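The replicability notion above can be illustrated empirically, in the spirit of the experiments the abstract describes: run a clustering algorithm twice with shared internal randomness on two independent samples from the same distribution, then measure how closely the two outputs agree as partitions of fresh points. The sketch below (a minimal illustration, not the paper's algorithm) uses sklearn's k-means++ as the black box; the 2D three-Gaussian mixture, seeds, and the pairwise-agreement metric are our own illustrative assumptions. Note that vanilla k-means++ with a shared seed is not guaranteed to be replicable; the paper's contribution is a wrapper achieving that guarantee.

```python
# Sketch: measuring how "replicable" a black-box clustering algorithm is under
# shared internal randomness, per the definition in the abstract.
# The distribution, seeds, and agreement metric are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sample(n=500):
    # Synthetic 2D mixture of three well-separated Gaussians (illustrative).
    centers = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
    labels = rng.integers(0, 3, size=n)
    return centers[labels] + rng.normal(scale=0.5, size=(n, 2))

# Shared internal randomness: both executions use the same random_state,
# but each is trained on an independent sample from the same distribution.
shared_seed = 42
km1 = KMeans(n_clusters=3, init="k-means++", n_init=1,
             random_state=shared_seed).fit(sample())
km2 = KMeans(n_clusters=3, init="k-means++", n_init=1,
             random_state=shared_seed).fit(sample())

# The two runs induce the same partition of the sample space iff fresh points
# are grouped identically. Compare the induced same-cluster relations, which
# is invariant to permutations of the cluster labels.
test = sample(200)
p1, p2 = km1.predict(test), km2.predict(test)
same1 = p1[:, None] == p1[None, :]
same2 = p2[:, None] == p2[None, :]
agreement = (same1 == same2).mean()  # 1.0 means identical partitions
print(f"pairwise partition agreement: {agreement:.3f}")
```

A replicable algorithm in the paper's sense would drive this agreement to 1 with high probability over the two samples, for the exact partition of the whole space rather than a finite test set.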
Cite

Text

Esfandiari et al. "Replicable Clustering." Neural Information Processing Systems, 2023.

Markdown

[Esfandiari et al. "Replicable Clustering." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/esfandiari2023neurips-replicable/)

BibTeX
@inproceedings{esfandiari2023neurips-replicable,
  title = {{Replicable Clustering}},
  author = {Esfandiari, Hossein and Karbasi, Amin and Mirrokni, Vahab and Velegkas, Grigoris and Zhou, Felix},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/esfandiari2023neurips-replicable/}
}