Stable Recurrent Models

Abstract

Stability is a fundamental property of dynamical systems, yet to date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove that stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent. Empirically, we demonstrate that stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models with feed-forward models.
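
For readers deciding whether the paper is relevant to them, "stability" here refers to a contraction condition on the recurrent state-transition map. The display below is a minimal sketch of that condition; the notation (\phi_w, h_t, x_t, \lambda) is our illustrative shorthand rather than a verbatim quotation from the paper.

% A recurrent model h_t = \phi_w(h_{t-1}, x_t) is stable if its
% state-transition map is a contraction in the state argument:
\exists\, \lambda < 1 \;\text{such that}\;
\lVert \phi_w(h, x) - \phi_w(h', x) \rVert \;\le\; \lambda\, \lVert h - h' \rVert
\quad \text{for all states } h, h' \text{ and inputs } x.

For instance, a vanilla RNN h_t = \tanh(W h_{t-1} + U x_t) satisfies this condition whenever the recurrent weight matrix has spectral norm \lVert W \rVert < 1, since \tanh is 1-Lipschitz.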

Cite

Text

Miller and Hardt. "Stable Recurrent Models." International Conference on Learning Representations, 2019.

Markdown

[Miller and Hardt. "Stable Recurrent Models." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/miller2019iclr-stable/)

BibTeX

@inproceedings{miller2019iclr-stable,
  title     = {{Stable Recurrent Models}},
  author    = {Miller, John and Hardt, Moritz},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/miller2019iclr-stable/}
}