Communication Bounds for the Distributed Experts Problem
Abstract
In this work, we study the experts problem in the distributed setting, where an expert's cost must be aggregated across multiple servers. Our study considers various communication models, such as the message-passing model and the broadcast model, along with multiple aggregation functions, such as summing and taking the $\ell_p$ norm of an expert's cost across servers. We propose the first communication-efficient protocols that achieve near-optimal regret in these settings, even against a strong adversary who can choose the inputs adaptively. Additionally, we give a conditional lower bound showing that the communication of our protocols is nearly optimal. Finally, we implement our protocols and demonstrate empirical communication savings on the HPO-B benchmarks.
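The paper's protocols themselves are not reproduced in this abstract. For orientation, below is a minimal sketch of the centralized Hedge (multiplicative weights) baseline that distributed protocols are typically measured against: a single coordinator sees every server's cost for every expert and aggregates them with an $\ell_p$ norm (with nonnegative costs, $p = 1$ recovers the summing aggregation). The function name, parameters, and data shapes are illustrative assumptions, not taken from the paper.

import numpy as np

def hedge_regret(costs, p=1.0, eta=0.1):
    """Centralized Hedge baseline for the distributed experts setting.

    costs: array of shape (T, s, n) -- the cost of each of n experts on
           each of s servers over T rounds (synthetic; the paper's point
           is to avoid shipping all of this to one coordinator).
    p:     aggregate an expert's per-round costs across servers with the
           l_p norm; p = 1 with nonnegative costs is the sum.
    eta:   learning rate for the multiplicative-weights update.
    Returns the regret of Hedge against the best fixed expert in hindsight.
    """
    T, s, n = costs.shape
    weights = np.ones(n)
    expected_cost = 0.0
    for t in range(T):
        probs = weights / weights.sum()
        # Aggregate each expert's cost over the s servers for this round.
        agg = np.linalg.norm(costs[t], ord=p, axis=0)
        expected_cost += probs @ agg       # cost paid by the learner
        weights *= np.exp(-eta * agg)      # multiplicative update
    per_round = np.linalg.norm(costs, ord=p, axis=1)   # shape (T, n)
    best_fixed = per_round.sum(axis=0).min()
    return expected_cost - best_fixed

# Example: 1000 rounds, 8 servers, 16 experts, costs in [0, 1].
rng = np.random.default_rng(0)
print(hedge_regret(rng.random((1000, 8, 16)), p=2.0))

With the standard tuning $\eta = \sqrt{2 \ln n / T}$, this baseline attains $O(\sqrt{T \ln n})$ regret for costs in $[0, 1]$; the paper's contribution is to approach such regret while communicating far less than the $T \cdot s \cdot n$ cost entries this centralized sketch consumes.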
Cite
Text
Jia et al. "Communication Bounds for the Distributed Experts Problem." Neural Information Processing Systems, 2024. doi:10.52202/079017-1603
Markdown
[Jia et al. "Communication Bounds for the Distributed Experts Problem." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/jia2024neurips-communication/) doi:10.52202/079017-1603
BibTeX
@inproceedings{jia2024neurips-communication,
  title = {{Communication Bounds for the Distributed Experts Problem}},
  author = {Jia, Zhihao and Pang, Qi and Tran, Trung and Woodruff, David and Zhang, Zhihao and Zheng, Wenting},
  booktitle = {Neural Information Processing Systems},
  year = {2024},
  doi = {10.52202/079017-1603},
  url = {https://mlanthology.org/neurips/2024/jia2024neurips-communication/}
}