A Generalized Neural Tangent Kernel Analysis for Two-Layer Neural Networks

Abstract

A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training, and it cannot handle regularizers or gradient noise. In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit "kernel-like" behavior. This implies that the training loss converges linearly up to a certain accuracy. We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
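As a rough sketch of the setting (the notation below is assumed for illustration and may differ from the paper's exact parameterization and scaling), the two-layer network, its neural tangent kernel at initialization, and one step of noisy gradient descent with weight decay can be written as:

% Two-layer network of width m with activation \sigma (assumed parameterization):
f_{\mathbf{W}}(\mathbf{x}) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r \, \sigma(\mathbf{w}_r^\top \mathbf{x})

% Neural tangent kernel evaluated at the initialization \mathbf{W}_0:
\Theta(\mathbf{x}, \mathbf{x}') = \big\langle \nabla_{\mathbf{W}} f_{\mathbf{W}_0}(\mathbf{x}), \, \nabla_{\mathbf{W}} f_{\mathbf{W}_0}(\mathbf{x}') \big\rangle

% Noisy gradient descent with weight decay on the training loss L_S: step size \eta,
% weight-decay coefficient \lambda, inverse temperature \beta (all symbols assumed here);
% the (1 - \eta\lambda) factor implements the weight decay and \boldsymbol{\xi}_t is fresh
% standard Gaussian noise injected at each step.
\mathbf{W}_{t+1} = (1 - \eta \lambda) \, \mathbf{W}_t - \eta \nabla_{\mathbf{W}} L_S(\mathbf{W}_t) + \sqrt{2\eta/\beta} \, \boldsymbol{\xi}_t, \qquad \boldsymbol{\xi}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})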

Cite

Text

Chen et al. "A Generalized Neural Tangent Kernel Analysis for Two-Layer Neural Networks." Neural Information Processing Systems, 2020.

Markdown

[Chen et al. "A Generalized Neural Tangent Kernel Analysis for Two-Layer Neural Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/chen2020neurips-generalized/)

BibTeX

@inproceedings{chen2020neurips-generalized,
  title     = {{A Generalized Neural Tangent Kernel Analysis for Two-Layer Neural Networks}},
  author    = {Chen, Zixiang and Cao, Yuan and Gu, Quanquan and Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/chen2020neurips-generalized/}
}