Do Outliers Ruin Collaboration?
Abstract
We consider the problem of learning a binary classifier from $n$ different data sources, among which at most an $\eta$ fraction are adversarial. The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. We present an algorithm that achieves an $O(\eta n + \ln n)$ overhead, which is proved to be worst-case optimal. We also discuss the potential challenges in designing a computationally efficient learning algorithm with a small overhead.
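To make the overhead measure concrete, the following sketch restates the definition given in the abstract; the symbols $m_{\mathrm{single}}$ and $m_{\mathrm{collab}}$ are illustrative names introduced here, not notation from the paper. Let $m_{\mathrm{single}}(\epsilon, \delta)$ denote the sample complexity of $(\epsilon, \delta)$-PAC learning the hypothesis class on a single data distribution, and let $m_{\mathrm{collab}}(\epsilon, \delta)$ denote the total number of samples drawn across all $n$ sources in the adversarial collaborative setting. Then

$$\mathrm{overhead} = \frac{m_{\mathrm{collab}}(\epsilon, \delta)}{m_{\mathrm{single}}(\epsilon, \delta)},$$

and the abstract's guarantee states $\mathrm{overhead} = O(\eta n + \ln n)$. In particular, when no source is adversarial ($\eta = 0$), the bound reduces to an $O(\ln n)$ overhead.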
Cite
Text
Qiao. "Do Outliers Ruin Collaboration?." International Conference on Machine Learning, 2018.Markdown
[Qiao. "Do Outliers Ruin Collaboration?." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/qiao2018icml-outliers/)BibTeX
@inproceedings{qiao2018icml-outliers,
title = {{Do Outliers Ruin Collaboration?}},
author = {Qiao, Mingda},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {4180--4187},
volume = {80},
url = {https://mlanthology.org/icml/2018/qiao2018icml-outliers/}
}