Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms
Abstract
We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We first investigate distributed stochastic gradient methods (SGM), with mini-batches and multi-passes over the data. We show that optimal generalization error bounds (up to a logarithmic factor) can be retained for distributed SGM provided that the partition level is not too large. We then extend our results to spectral algorithms (SA), including kernel ridge regression (KRR), kernel principal component regression, and gradient methods. Our results show that distributed SGM has a smaller theoretical computational complexity than distributed KRR and classic SGM. Moreover, even for general non-distributed SA, they provide optimal, capacity-dependent convergence rates for the case that the regression function may not be in the RKHS, in the well-conditioned regimes.
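To make the divide-and-conquer scheme described in the abstract concrete, the sketch below runs multi-pass mini-batch SGM independently on m disjoint subsets of the data, each local estimator being a kernel expansion updated by functional gradient steps on the least-squares loss, and then averages the m local predictors. This is a minimal illustrative sketch, not the paper's analyzed algorithm: the Gaussian kernel width, step size eta, batch size, number of passes, and partition level m are placeholder choices, not the schedules for which the optimal rates are proved.

import numpy as np

SIGMA = 0.2  # Gaussian kernel width (illustrative placeholder)

def gaussian_kernel(A, B):
    """Gram matrix K[i, j] = exp(-||A_i - B_j||^2 / (2 * SIGMA^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * SIGMA ** 2))

def local_sgm(X, y, passes=5, batch=8, eta=0.5, rng=None):
    """Multi-pass mini-batch SGM on one local subset.

    Maintains f_t = sum_i alpha[i] K(x_i, .) and takes functional gradient
    steps on the least-squares loss over random mini-batches."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(X)
    K = gaussian_kernel(X, X)               # local Gram matrix
    alpha = np.zeros(n)
    for _ in range(passes):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            B = order[start:start + batch]
            residual = K[B] @ alpha - y[B]  # f_t(x_i) - y_i on the mini-batch
            alpha[B] -= eta / len(B) * residual
    return alpha

def distributed_sgm(X, y, m=4, **sgm_kwargs):
    """Split the data into m disjoint subsets, run SGM on each, average the predictors."""
    rng = np.random.default_rng(0)
    parts = np.array_split(rng.permutation(len(X)), m)
    local = [(X[I], local_sgm(X[I], y[I], rng=rng, **sgm_kwargs)) for I in parts]

    def predict(X_test):
        # Uniform average of the m local kernel predictors.
        return np.mean([gaussian_kernel(X_test, Xj) @ aj for Xj, aj in local], axis=0)

    return predict

# Toy usage: noisy sine regression on [0, 1].
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(400, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
f_hat = distributed_sgm(X, y, m=4)
print(f_hat(np.linspace(0, 1, 5)[:, None]))

Replacing local_sgm with a local KRR solve (and averaging in the same way) gives the analogous sketch for distributed KRR; the averaging step is identical in both cases.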
Cite

Text
Lin and Cevher. "Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms." Journal of Machine Learning Research, 2020.

Markdown
[Lin and Cevher. "Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms." Journal of Machine Learning Research, 2020.](https://mlanthology.org/jmlr/2020/lin2020jmlr-optimal/)

BibTeX
@article{lin2020jmlr-optimal,
  title   = {{Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms}},
  author  = {Lin, Junhong and Cevher, Volkan},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  pages   = {1--63},
  url     = {https://mlanthology.org/jmlr/2020/lin2020jmlr-optimal/}
}