Can SGD Learn Recurrent Neural Networks with Provable Generalization?

Abstract

Recurrent Neural Networks (RNNs) are among the most popular models in sequential data analysis. Yet, in the foundational language of PAC learning, what concept class can they learn? Moreover, how can the same recurrent unit simultaneously learn functions from different input tokens to different output tokens without these functions interfering with one another? Existing generalization bounds for RNNs scale exponentially with the input length, significantly limiting their practical implications. In this paper, we show that with vanilla stochastic gradient descent (SGD), RNNs can in fact learn a notable concept class \emph{efficiently}, meaning that both the time and the sample complexity scale \emph{polynomially} in the input length (or almost polynomially, depending on the concept). This concept class at least includes functions in which each output token is generated from the inputs of earlier tokens by a smooth two-layer neural network.
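To make the concept class concrete, below is a minimal sketch (in PyTorch, our own choice; the paper contains no code) of the kind of learning problem the abstract describes: the target generates each output token from an earlier input token via a smooth two-layer network, and the learner is a vanilla Elman RNN trained with plain SGD. All dimensions, the tanh activation, the one-token lag, and the learning rate are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the learning setup described in the abstract.
# Target concept: y_l = B * tanh(A * x_{l-1}), a smooth two-layer net
# applied to an earlier input token. Learner: vanilla Elman RNN + SGD.
import torch
import torch.nn as nn

d_in, d_hid, d_out, L = 8, 64, 4, 10   # token dim, widths, sequence length (illustrative)

# Fixed ground-truth two-layer network (not trained)
A = torch.randn(d_hid, d_in) / d_in ** 0.5
B = torch.randn(d_out, d_hid) / d_hid ** 0.5

def target(x):                          # x: (batch, L, d_in)
    shifted = torch.roll(x, shifts=1, dims=1)
    shifted[:, 0] = 0.0                 # the first output has no earlier token
    return torch.tanh(shifted @ A.T) @ B.T

class ElmanLearner(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(d_in, d_hid, batch_first=True)  # same recurrent unit at every position
        self.head = nn.Linear(d_hid, d_out)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)             # one prediction per output token

model = ElmanLearner()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(2000):                # plain SGD on fresh samples each step
    x = torch.randn(32, L, d_in)
    loss = ((model(x) - target(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same recurrent unit must fit a different input-to-output mapping at every position; the paper's guarantee is that, for targets of this smooth two-layer form, the time and sample complexity of such SGD training grow only polynomially (or almost polynomially) in the sequence length L.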

Cite

Text

Allen-Zhu and Li. "Can SGD Learn Recurrent Neural Networks with Provable Generalization?" Neural Information Processing Systems, 2019.

Markdown

[Allen-Zhu and Li. "Can SGD Learn Recurrent Neural Networks with Provable Generalization?" Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/allenzhu2019neurips-sgd/)

BibTeX

@inproceedings{allenzhu2019neurips-sgd,
  title     = {{Can SGD Learn Recurrent Neural Networks with Provable Generalization?}},
  author    = {Allen-Zhu, Zeyuan and Li, Yuanzhi},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {10331--10341},
  url       = {https://mlanthology.org/neurips/2019/allenzhu2019neurips-sgd/}
}