A Practical Algorithm for Distributed Clustering and Outlier Detection
Abstract
We study classic k-means/median clustering, two fundamental problems in unsupervised learning, in the setting where the data are partitioned across multiple sites and where we are allowed to discard a small portion of the data by labeling it as outliers. We propose a simple approach based on constructing a small summary of the original dataset. The proposed method is time- and communication-efficient, has good approximation guarantees, and identifies global outliers effectively. To the best of our knowledge, this is the first practical algorithm with theoretical guarantees for distributed clustering with outliers. Our experiments on both real and synthetic data demonstrate the clear superiority of our algorithm over all baseline algorithms on almost all metrics.
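The abstract describes a summary-then-cluster pattern: each site compresses its data into a small weighted summary, a coordinator clusters the merged summaries, and the points farthest from the resulting centers are flagged as global outliers. The sketch below illustrates that general pattern only; it is not the paper's algorithm, and the names (`local_summary`, `summary_size`, the k-means++-style seeding, the parameter `z` for the number of outliers) are assumptions made for illustration.

```python
# Illustrative sketch of distributed summary-based clustering with outlier
# flagging. NOT the algorithm of Chen, Sadeqi Azer, and Zhang (2018); a toy
# stand-in for the general "summarize locally, cluster centrally" idea.
import numpy as np

def local_summary(points, summary_size, rng):
    """Compress one site's points into a small weighted summary.

    Representatives are chosen with a k-means++-style rule; each original
    point contributes unit weight to its nearest representative.
    """
    n = len(points)
    idx = [rng.integers(n)]
    d2 = np.sum((points - points[idx[0]]) ** 2, axis=1)
    for _ in range(summary_size - 1):
        idx.append(rng.choice(n, p=d2 / d2.sum()))
        d2 = np.minimum(d2, np.sum((points - points[idx[-1]]) ** 2, axis=1))
    reps = points[idx]
    assign = np.argmin(((points[:, None, :] - reps[None, :, :]) ** 2).sum(-1), axis=1)
    weights = np.bincount(assign, minlength=summary_size).astype(float)
    return reps, weights

def weighted_kmeans(points, weights, k, iters, rng):
    """Plain weighted Lloyd iterations on the merged summary."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            mask = assign == j
            if weights[mask].sum() > 0:
                centers[j] = np.average(points[mask], axis=0, weights=weights[mask])
    return centers

def distributed_kmeans_with_outliers(site_data, k, z, summary_size=50, seed=0):
    """Coordinator side: merge per-site summaries, cluster, flag z outliers."""
    rng = np.random.default_rng(seed)
    reps, wts = zip(*(local_summary(P, summary_size, rng) for P in site_data))
    merged, merged_w = np.vstack(reps), np.concatenate(wts)
    centers = weighted_kmeans(merged, merged_w, k, iters=20, rng=rng)
    # Flag the z points farthest from any center as global outliers.
    all_points = np.vstack(site_data)
    dists = np.min(((all_points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return centers, np.argsort(dists)[-z:]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sites = [rng.normal(loc=c, size=(200, 2)) for c in ([0, 0], [8, 8], [0, 8])]
    sites[0] = np.vstack([sites[0], rng.uniform(-30, 30, size=(5, 2))])  # inject outliers
    centers, outliers = distributed_kmeans_with_outliers(sites, k=3, z=5)
    print(centers)
    print(outliers)
```

In this toy version only the weighted summaries cross the network, which is what makes the approach communication-efficient; the paper's actual construction and its approximation guarantees are developed in the full text.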
Cite

Text

Chen et al. "A Practical Algorithm for Distributed Clustering and Outlier Detection." Neural Information Processing Systems, 2018.

Markdown

[Chen et al. "A Practical Algorithm for Distributed Clustering and Outlier Detection." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/chen2018neurips-practical/)

BibTeX
@inproceedings{chen2018neurips-practical,
title = {{A Practical Algorithm for Distributed Clustering and Outlier Detection}},
author = {Chen, Jiecao and Azer, Erfan Sadeqi and Zhang, Qin},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {2248-2256},
url = {https://mlanthology.org/neurips/2018/chen2018neurips-practical/}
}