Tight Bounds for Collaborative PAC Learning via Multiplicative Weights

Abstract

We study the collaborative PAC learning problem recently proposed in Blum et al. \cite{BHPQ17}, in which $k$ players want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of a collaborative learning algorithm is measured by the ratio between its sample complexity and the sample complexity of learning a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead $O(\ln k)$, improving the one with overhead $O(\ln^2 k)$ in \cite{BHPQ17}. We also show that an $\Omega(\ln k)$ overhead is inevitable when $k$ is polynomially bounded by the VC dimension of the hypothesis class. Finally, our experimental study demonstrates the superiority of our algorithm over the one in Blum et al. \cite{BHPQ17} on real-world datasets.
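
The sketch below illustrates the multiplicative-weights idea referenced in the title, under stated assumptions: it maintains a weight per player, trains a black-box single-distribution learner on the weighted mixture of the players' distributions, and doubles the weight of any player on whom the current hypothesis still errs too often. The interfaces `sample(i, m)` and `learner(data)`, as well as all constants (round count, sample sizes, error threshold), are hypothetical placeholders for illustration, not the quantities analyzed in the paper.

```python
import math
import random
from collections import Counter

def collaborative_mw_learn(sample, learner, k, eps,
                           rounds=None, test_size=200, batch_size=500):
    """Hedged sketch of a multiplicative-weights collaborative PAC learner.

    Assumed interfaces (not specified in the abstract):
      - sample(i, m): list of (x, y) pairs drawn i.i.d. from player i's
        distribution, labeled by the target function.
      - learner(data): black-box single-distribution PAC learner returning a
        hypothesis h, where h(x) is the predicted label.
    All constants below are illustrative, not the ones proved in the paper.
    """
    weights = [1.0] * k
    hypotheses = []
    T = rounds if rounds is not None else max(1, math.ceil(4 * math.log2(k + 1)))

    for _ in range(T):
        # Draw a training set from the weighted mixture of player distributions.
        data = []
        for _ in range(batch_size):
            i = random.choices(range(k), weights=weights)[0]
            data.extend(sample(i, 1))
        h = learner(data)
        hypotheses.append(h)

        # Test h on each player; double the weight of players it still fails,
        # so they receive more attention in later rounds.
        for i in range(k):
            test = sample(i, test_size)
            err = sum(1 for x, y in test if h(x) != y) / len(test)
            if err > 0.75 * eps:
                weights[i] *= 2.0

    # Final predictor: majority vote over the per-round hypotheses.
    def majority(x):
        votes = Counter(h(x) for h in hypotheses)
        return votes.most_common(1)[0][0]
    return majority
```

Because each round roughly halves (in expectation) the total weight of players not yet well served, on the order of $\ln k$ rounds suffice in this style of argument, which is the source of the $O(\ln k)$ overhead claimed in the abstract.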

Cite

Text

Chen et al. "Tight Bounds for Collaborative PAC Learning via Multiplicative Weights." Neural Information Processing Systems, 2018.

Markdown

[Chen et al. "Tight Bounds for Collaborative PAC Learning via Multiplicative Weights." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/chen2018neurips-tight/)

BibTeX

@inproceedings{chen2018neurips-tight,
  title     = {{Tight Bounds for Collaborative PAC Learning via Multiplicative Weights}},
  author    = {Chen, Jiecao and Zhang, Qin and Zhou, Yuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3598-3607},
  url       = {https://mlanthology.org/neurips/2018/chen2018neurips-tight/}
}