Generalized Leverage Score Sampling for Neural Networks

Abstract

Leverage score sampling is a powerful technique originating in theoretical computer science that can be used to accelerate a large number of fundamental problems, e.g., linear regression, linear programming, semi-definite programming, the cutting plane method, graph sparsification, maximum matching, and max-flow. Recently, it has been shown that leverage score sampling helps to accelerate kernel methods [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17]. In this work, we generalize the results of [Avron et al. 17] to a broader class of kernels. We further bring leverage score sampling into the field of deep learning theory. 1. We show a connection between the initialization used for neural network training and approximating the neural tangent kernel with random features. 2. We prove the equivalence between a regularized neural network and neural tangent kernel ridge regression under both classical random Gaussian initialization and leverage score sampling.
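The paper's contributions are theoretical, but as a rough illustration of the core primitive, the following is a minimal NumPy sketch of ridge leverage score sampling used to build a Nyström-style approximation for kernel ridge regression (the setting of [Avron et al. 17]). The RBF kernel, regularization value, and sample size here are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def ridge_leverage_scores(K, lam):
    """lambda-ridge leverage scores: tau_i = (K (K + lam I)^{-1})_{ii}."""
    n = K.shape[0]
    K_inv = np.linalg.solve(K + lam * np.eye(n), np.eye(n))
    return np.einsum("ij,ji->i", K, K_inv)  # diagonal of K @ K_inv

rng = np.random.default_rng(0)
n, d, s, lam = 500, 5, 50, 1e-2  # illustrative sizes, not from the paper
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

K = rbf_kernel(X, X)
tau = ridge_leverage_scores(K, lam)
p = tau / tau.sum()

# Sample s landmark points with probability proportional to their
# ridge leverage scores; high-leverage points are kept more often.
S = rng.choice(n, size=s, replace=False, p=p)

# Kernel ridge regression restricted to the sampled columns (Nystrom):
# minimize ||K_nS a - y||^2 + lam * a^T K_SS a.
K_nS = K[:, S]
K_SS = K[np.ix_(S, S)]
alpha = np.linalg.solve(K_nS.T @ K_nS + lam * K_SS, K_nS.T @ y)
y_hat = K_nS @ alpha

print("train MSE:", np.mean((y_hat - y) ** 2))
```

This sketch solves the exact n x n system once to obtain the scores; the point of leverage-score-based methods is that the scores themselves can be approximated much faster, which is where the speedups discussed in the paper come from.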

Cite

Text

Lee et al. "Generalized Leverage Score Sampling for Neural Networks." Neural Information Processing Systems, 2020.

Markdown

[Lee et al. "Generalized Leverage Score Sampling for Neural Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/lee2020neurips-generalized/)

BibTeX

@inproceedings{lee2020neurips-generalized,
  title     = {{Generalized Leverage Score Sampling for Neural Networks}},
  author    = {Lee, Jason and Shen, Ruoqi and Song, Zhao and Wang, Mengdi and Yu, Zheng},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/lee2020neurips-generalized/}
}