Collaborative PAC Learning
Abstract
We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning algorithms with O(ln(k)) and O(ln^2(k)) overhead in the personalized and centralized variants of our model, respectively. This gives an exponential improvement upon the naive algorithm that does not share information among players. We complement our upper bounds with an Omega(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor.
Cite
Text
Blum et al. "Collaborative PAC Learning." Neural Information Processing Systems, 2017.
Markdown
[Blum et al. "Collaborative PAC Learning." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/blum2017neurips-collaborative/)
BibTeX
@inproceedings{blum2017neurips-collaborative,
title = {{Collaborative PAC Learning}},
author = {Blum, Avrim and Haghtalab, Nika and Procaccia, Ariel D. and Qiao, Mingda},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {2392--2401},
url = {https://mlanthology.org/neurips/2017/blum2017neurips-collaborative/}
}