Learning Sequential Tasks by Incrementally Adding Higher Orders
Abstract
An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks.
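The core mechanism, new units that modify a connection weight at the next time-step using the previous step's input, can be sketched roughly as follows. This is a hypothetical illustration under assumed names and a plain delta-rule stand-in for learning; it is not the paper's implementation.

```python
import numpy as np

class HigherOrderNet:
    """Sketch of an incremental higher-order network (names are assumptions).

    Each weight w[i, j] may acquire a higher-order unit whose output,
    computed from the PREVIOUS time-step's input, is added to that weight
    before use. No recurrent feedback connections are needed.
    """

    def __init__(self, n_in, n_out, lr=0.1):
        self.w = np.zeros((n_out, n_in))   # first-order weights
        self.units = {}                    # (i, j) -> higher-order unit's weights
        self.prev_x = np.zeros(n_in)       # input remembered from the last step
        self.lr = lr

    def forward(self, x):
        w_eff = self.w.copy()
        for (i, j), v in self.units.items():
            # the unit dynamically modifies w[i, j] using last step's input
            w_eff[i, j] += v @ self.prev_x
        self.last = (x, w_eff)
        return w_eff @ x

    def add_unit(self, i, j):
        # incrementally add a higher-order unit for weight (i, j)
        self.units[(i, j)] = np.zeros(self.prev_x.shape)

    def update(self, err):
        x, _ = self.last
        self.w += self.lr * np.outer(err, x)   # ordinary delta rule
        for (i, j), v in self.units.items():
            # credit reaches the unit through the weight it modifies
            self.units[(i, j)] = v + self.lr * err[i] * x[j] * self.prev_x

    def end_step(self, x):
        self.prev_x = x.copy()   # becomes "previous input" for the next step
```

Because each new unit's input comes from an earlier time-step, stacking units on units lets the network reach arbitrarily far into the past without unrolling or backpropagating through time.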
Cite
Text
Ring, Mark. "Learning Sequential Tasks by Incrementally Adding Higher Orders." Neural Information Processing Systems, 1992.
BibTeX
@inproceedings{ring1992neurips-learning,
title = {{Learning Sequential Tasks by Incrementally Adding Higher Orders}},
author = {Ring, Mark},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {115-122},
url = {https://mlanthology.org/neurips/1992/ring1992neurips-learning/}
}