Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- a Neural Tangent Kernel Perspective

Abstract

Deep residual networks (ResNets) have demonstrated better generalization performance than deep feedforward networks (FFNets). However, the theory behind this phenomenon is still largely unknown. This paper studies this fundamental problem in deep learning from a so-called ``neural tangent kernel'' perspective. Specifically, we first show that under proper conditions, as the width goes to infinity, training deep ResNets can be viewed as learning reproducing kernel functions induced by some kernel function. We then compare the kernel of deep ResNets with that of deep FFNets and discover that the class of functions induced by the kernel of FFNets is asymptotically not learnable as the depth goes to infinity. In contrast, the class of functions induced by the kernel of ResNets does not exhibit such degeneracy. Our discovery partially justifies the advantage of deep ResNets over deep FFNets in generalization ability. Numerical results are provided to support our claim.
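
As a minimal sketch of the neural tangent kernel idea referenced in the abstract (the notation below is illustrative and not taken from the paper): for a network $f(x; \theta)$ with parameters $\theta$ at initialization $\theta_0$, the NTK is

\[
  \Theta(x, x') \;=\; \big\langle \nabla_\theta f(x; \theta_0),\; \nabla_\theta f(x'; \theta_0) \big\rangle .
\]

In the infinite-width limit this kernel concentrates and stays essentially fixed throughout gradient-descent training, so learning the network reduces to kernel regression with $\Theta$; the paper compares the kernels obtained in this way for deep ResNets and deep FFNets as the depth grows.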

Cite

Text

Huang et al. "Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- a Neural Tangent Kernel Perspective." Neural Information Processing Systems, 2020.

Markdown

[Huang et al. "Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- a Neural Tangent Kernel Perspective." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/huang2020neurips-deep/)

BibTeX

@inproceedings{huang2020neurips-deep,
  title     = {{Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- a Neural Tangent Kernel Perspective}},
  author    = {Huang, Kaixuan and Wang, Yuqing and Tao, Molei and Zhao, Tuo},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/huang2020neurips-deep/}
}