Phase-Space Learning
Abstract
Existing recurrent net learning algorithms are inadequate. We introduce the conceptual framework of viewing recurrent training as matching vector fields of dynamical systems in phase space. Phase-space reconstruction techniques make the hidden states explicit, reducing temporal learning to a feed-forward problem. In short, we propose viewing iterated prediction [LF88] as the best way of training recurrent networks on deterministic signals. Using this framework, we can train multiple trajectories, ensure their stability, and design arbitrary dynamical systems.
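To make the approach concrete, the following is a minimal sketch of phase-space learning on a deterministic signal: delay-embed the scalar time series so the hidden state becomes explicit, fit a feed-forward regressor to the one-step map in the reconstructed phase space, and then iterate the network's predictions to generate the trajectory autonomously. The sine-wave signal, the embedding dimension and delay, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not taken from the paper.

# Illustrative sketch of phase-space learning (assumed setup, not the authors' code).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Deterministic training signal: a sampled sine wave.
t = np.arange(0.0, 60.0, 0.1)
x = np.sin(t)

# Phase-space reconstruction by delay embedding: the state [x_k, x_{k-tau}]
# makes the hidden state explicit, so learning the dynamics becomes feed-forward.
d, tau = 2, 5                      # embedding dimension and delay (assumed values)
N = len(x) - (d - 1) * tau - 1
states  = np.stack([x[i * tau : i * tau + N] for i in range(d)][::-1], axis=1)
targets = np.stack([x[i * tau + 1 : i * tau + 1 + N] for i in range(d)][::-1], axis=1)

# Feed-forward regression of the one-step map in phase space
# (the vector field sampled along the training trajectory).
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(states, targets)

# Iterated prediction: feed the network's output back as its next input
# to generate the trajectory autonomously.
s = states[0].copy()
generated = []
for _ in range(300):
    s = net.predict(s.reshape(1, -1))[0]
    generated.append(s[0])

print("first generated values:", np.round(generated[:5], 3))

Because the embedded states stand in for the recurrent network's hidden units, stability and multi-trajectory training reduce to choosing which phase-space points and targets the feed-forward learner sees.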
Cite
Tsung, Fu-Sheng, and Garrison W. Cottrell. "Phase-Space Learning." Neural Information Processing Systems, 1994. https://mlanthology.org/neurips/1994/tsung1994neurips-phasespace/

BibTeX
@inproceedings{tsung1994neurips-phasespace,
title = {{Phase-Space Learning}},
author = {Tsung, Fu-Sheng and Cottrell, Garrison W.},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {481--488},
url = {https://mlanthology.org/neurips/1994/tsung1994neurips-phasespace/}
}