Training Recurrent Neural Networks Online by Learning Explicit State Variables

Abstract

Recurrent neural networks (RNNs) allow an agent to construct a state-representation from a stream of experience, which is essential in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length and long training times. There are a variety of strategies to improve training in RNNs, most notably Backpropagation Through Time (BPTT) and Real-Time Recurrent Learning (RTRL). These strategies, however, are typically computationally expensive and devote most of their computation to propagating gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm---called Fixed Point Propagation (FPP)---is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels.
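The core idea in the abstract---treating hidden states as explicit, learnable variables so that the objective decomposes over adjacent time steps---can be illustrated with a minimal sketch. This is not the authors' exact FPP algorithm; it is a hypothetical scalar-RNN formulation where a prediction loss is combined with a consistency penalty tying each stored state to the RNN transition from its predecessor, and all quantities (the weights `w, u, v`, the penalty weight `lam`, and the toy data) are assumptions for illustration:

```python
import numpy as np

# Hedged sketch, not the paper's implementation: hidden states s_1..s_T
# are free variables, so no gradient ever flows back through time --
# the consistency term only couples neighboring states.

rng = np.random.default_rng(0)
T = 8
xs = rng.normal(size=T)       # observation stream (toy data)
ys = np.sin(np.cumsum(xs))    # regression targets (toy data)

def objective(params, lam=1.0):
    w, u, v = params[:3]      # RNN weights (scalar cell for simplicity)
    s = params[3:]            # explicit state variables s_1..s_T
    pred = ((v * s - ys) ** 2).sum()                       # prediction loss
    cons = ((s[1:] - np.tanh(w * s[:-1] + u * xs[:-1])) ** 2).sum()
    return pred + lam * cons  # consistency replaces unrolled backprop

def num_grad(f, p, eps=1e-5):
    # Finite-difference gradient, adequate for this tiny illustration.
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

params = rng.normal(scale=0.1, size=3 + T)  # weights and states jointly
loss_before = objective(params)
for _ in range(500):
    params -= 0.01 * num_grad(objective, params)  # plain gradient descent
loss_after = objective(params)
print(loss_before, loss_after)
```

Because weights and states are optimized jointly over a fixed buffer, each gradient step touches only neighboring time steps, which is the property the abstract contrasts with truncated BPTT.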

Cite

Text

Nath et al. "Training Recurrent Neural Networks Online by Learning Explicit State Variables." International Conference on Learning Representations, 2020.

Markdown

[Nath et al. "Training Recurrent Neural Networks Online by Learning Explicit State Variables." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/nath2020iclr-training/)

BibTeX

@inproceedings{nath2020iclr-training,
  title     = {{Training Recurrent Neural Networks Online by Learning Explicit State Variables}},
  author    = {Nath, Somjit and Liu, Vincent and Chan, Alan and Li, Xin and White, Adam and White, Martha},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/nath2020iclr-training/}
}