Approximate Cluster Recovery from Noisy Labels
Abstract
Designing algorithms for machine learning problems that go beyond worst-case analysis, and in particular analyzing the effect of side information on the complexity of such problems, is an important line of research with many practical applications. In this paper we study the classic $k$-means clustering problem in the presence of noisy labels. In this problem, in addition to a set of points and a parameter $k$, we receive a cluster label for each point, generated by either an adversarial or a random perturbation of the optimal solution. Our main goal is to formally study the effect of this extra information on the complexity of the $k$-means problem. In particular, in the context of random perturbations, we give an efficient algorithm that finds a clustering of cost within a factor $1+o(1)$ of the optimum even when the label of each point is perturbed with a large probability (think 99%). In contrast, we show that the side information with adversarial perturbations is as hard to exploit as the original problem, even if only a small $\epsilon$ fraction of the labels are perturbed. We complement this negative result with a simple algorithm for the case where the adversary is only allowed to perturb an $\epsilon$ fraction of the labels in *each cluster*.
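As a toy illustration of the random-perturbation setting, the sketch below (not the paper's algorithm) generates synthetic Gaussian clusters, perturbs 90% of the labels uniformly at random, and shows why such heavily noised labels still carry usable signal: the centroid of each noisy label class is shrunk toward the global mean but remains closest to its own true center, so these centroids can seed Lloyd's iterations. The data model, noise rate, and recovery heuristic here are all illustrative assumptions.

```python
import numpy as np

def noisy_label_kmeans_demo(k=3, n_per=200, d=2, p=0.9, seed=0):
    """Simulate random label perturbations and a naive recovery heuristic."""
    rng = np.random.default_rng(seed)

    # Ground truth: k well-separated Gaussian blobs.
    centers = rng.normal(0.0, 10.0, size=(k, d))
    X = np.vstack([centers[j] + rng.normal(0.0, 1.0, size=(n_per, d))
                   for j in range(k)])
    y_true = np.repeat(np.arange(k), n_per)

    # Noise model (an assumption for this demo): independently with
    # probability p, a point's label is replaced by a uniform random label.
    flip = rng.random(len(y_true)) < p
    y_noisy = np.where(flip, rng.integers(0, k, size=len(y_true)), y_true)

    # Per-label centroids are pulled toward the global mean by the noise,
    # but each is still closest to its own true center; use them to seed
    # Lloyd's iterations.
    C = np.stack([X[y_noisy == j].mean(axis=0) for j in range(k)])
    for _ in range(20):
        dists = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        C = np.stack([X[assign == j].mean(axis=0) if np.any(assign == j)
                      else C[j] for j in range(k)])

    cost = ((X - C[assign]) ** 2).sum()
    # Agreement with ground truth; label identities line up here because the
    # seeding preserves them (in general a matching step would be needed).
    return cost, (assign == y_true).mean()

if __name__ == "__main__":
    cost, agreement = noisy_label_kmeans_demo()
    print(f"k-means cost: {cost:.1f}, label agreement: {agreement:.3f}")
```

On well-separated synthetic data this typically recovers near-perfect clusterings despite 90% label noise; the paper's contribution is, of course, a provable $1+o(1)$ guarantee rather than this heuristic.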
Cite

Text:
Gamlath et al. "Approximate Cluster Recovery from Noisy Labels." Conference on Learning Theory, 2022.

Markdown:
[Gamlath et al. "Approximate Cluster Recovery from Noisy Labels." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/gamlath2022colt-approximate/)

BibTeX:
@inproceedings{gamlath2022colt-approximate,
title = {{Approximate Cluster Recovery from Noisy Labels}},
author = {Gamlath, Buddhima and Lattanzi, Silvio and Norouzi-Fard, Ashkan and Svensson, Ola},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {1463--1509},
volume = {178},
url = {https://mlanthology.org/colt/2022/gamlath2022colt-approximate/}
}