Towards Sharp Analysis for Distributed Learning with Random Features

Abstract

In recent studies, the generalization analyses of distributed learning and random features have assumed that the target concept lies in the hypothesis space. However, this strict condition does not hold in the more common non-attainable case. In this paper, using refined proof techniques, we first extend the optimal rates for distributed learning with random features to the non-attainable case. Then, we reduce the number of required random features via a data-dependent generating strategy and improve the allowed number of partitions with additional unlabeled data. Theoretical analysis shows that these techniques remarkably reduce the computational cost while preserving the optimal generalization accuracy under standard assumptions. Finally, we conduct experiments on both simulated and real-world datasets, and the empirical results validate our theoretical findings.
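To make the setting concrete, below is a minimal sketch (not the paper's algorithm) of distributed regression with random Fourier features: the data are split into partitions, a ridge estimator is fit in the shared random-feature space on each partition, and the local solutions are averaged. All names and hyperparameters (bandwidth, number of features, regularization, number of partitions) are illustrative assumptions.

import numpy as np

def random_fourier_features(X, W, b):
    # Map inputs to D-dimensional random Fourier features approximating
    # a Gaussian kernel: phi(x) = sqrt(2/D) * cos(x @ W + b).
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def fit_local_estimator(X, y, W, b, lam):
    # Ridge regression in the random-feature space on one partition.
    Phi = random_fourier_features(X, W, b)
    D = Phi.shape[1]
    A = Phi.T @ Phi + lam * len(y) * np.eye(D)
    return np.linalg.solve(A, Phi.T @ y)

def distributed_rf_regression(X, y, n_partitions=4, n_features=200,
                              bandwidth=1.0, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Shared random features so that local solutions can be averaged.
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Split the data, solve locally on each partition, then average.
    idx = np.array_split(rng.permutation(len(y)), n_partitions)
    thetas = [fit_local_estimator(X[i], y[i], W, b, lam) for i in idx]
    theta_bar = np.mean(thetas, axis=0)

    def predict(X_new):
        return random_fourier_features(X_new, W, b) @ theta_bar
    return predict

if __name__ == "__main__":
    # Toy usage example on synthetic one-dimensional data.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(2000, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)
    predict = distributed_rf_regression(X, y)
    X_test = np.linspace(-3, 3, 200)[:, None]
    mse = np.mean((predict(X_test) - np.sin(X_test[:, 0])) ** 2)
    print(f"test MSE: {mse:.4f}")

The data-dependent feature generation and the use of unlabeled data studied in the paper would replace the plain Gaussian sampling of W and the fixed partition count above; this sketch only illustrates the baseline divide-and-conquer estimator.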

Cite

Text

Li and Liu. "Towards Sharp Analysis for Distributed Learning with Random Features." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/436

Markdown

[Li and Liu. "Towards Sharp Analysis for Distributed Learning with Random Features." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/li2023ijcai-sharp/) doi:10.24963/IJCAI.2023/436

BibTeX

@inproceedings{li2023ijcai-sharp,
  title     = {{Towards Sharp Analysis for Distributed Learning with Random Features}},
  author    = {Li, Jian and Liu, Yong},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {3920--3928},
  doi       = {10.24963/IJCAI.2023/436},
  url       = {https://mlanthology.org/ijcai/2023/li2023ijcai-sharp/}
}