Learning Mackey-Glass from 25 Examples, Plus or Minus 2
Abstract
We apply active exemplar selection (Plutowski & White, 1991; 1993) to predicting a chaotic time series. Given a fixed set of examples, the method chooses a concise subset for training. Fitting these exemplars results in the entire set being fit as well as desired. The algorithm incorporates a method for regulating network complexity, automatically adding exemplars and hidden units as needed. Fitting examples generated from the Mackey-Glass equation with fractal dimension 2.1 to an rmse of 0.01 required about 25 exemplars and 3 to 6 hidden units. The method requires an order of magnitude fewer floating point operations than training on the entire set of examples, is significantly cheaper than two contending exemplar selection techniques, and suggests a simpler active selection technique that performs comparably.
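For context, the sketch below is not from the paper; it illustrates the two ingredients the abstract refers to. The first function generates the Mackey-Glass series by simple Euler integration of the delay differential equation dx/dt = βx(t−τ)/(1 + x(t−τ)^n) − γx(t), using the standard chaotic parameters (τ = 17 gives fractal dimension ≈ 2.1). The second is a hypothetical greedy loop in the spirit of the "simpler active selection technique" mentioned above: fit the current exemplar subset, add the worst-fit example from the full set, and repeat until the whole set is fit to the desired rmse. Function names, the integration scheme, and the max-error criterion are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

# Standard chaotic Mackey-Glass setting: tau = 17 gives fractal dimension ~2.1.
def mackey_glass(n_points=1000, tau=17, beta=0.2, gamma=0.1, n=10,
                 dt=0.1, x0=1.2, discard=5000):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)."""
    delay = int(round(tau / dt))
    total = discard + n_points
    x = np.full(total + delay, x0)            # constant initial history
    for t in range(delay, total + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[delay + discard:]                # drop the transient

# Hypothetical greedy selection loop: train on the current exemplar subset,
# then add the worst-fit example from the full set until the target rmse is met.
# `model` is any regressor with fit/predict (e.g. a small MLP); the max-error
# criterion is an illustrative stand-in, not the paper's selection criterion.
def select_exemplars(model, X, y, target_rmse=0.01, max_exemplars=50):
    chosen = [0]                              # seed with an arbitrary example
    while len(chosen) <= max_exemplars:
        model.fit(X[chosen], y[chosen])
        err = np.abs(model.predict(X) - y)    # residuals over the entire set
        if np.sqrt(np.mean(err ** 2)) <= target_rmse:
            break
        chosen.append(int(np.argmax(err)))    # add the largest-error example
    return chosen
```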
Cite
Text
Plutowski et al. "Learning Mackey-Glass from 25 Examples, Plus or Minus 2." Neural Information Processing Systems, 1993.
Markdown
[Plutowski et al. "Learning Mackey-Glass from 25 Examples, Plus or Minus 2." Neural Information Processing Systems, 1993.](https://mlanthology.org/neurips/1993/plutowski1993neurips-learning/)
BibTeX
@inproceedings{plutowski1993neurips-learning,
title = {{Learning Mackey-Glass from 25 Examples, Plus or Minus 2}},
author = {Plutowski, Mark and Cottrell, Garrison and White, Halbert},
booktitle = {Neural Information Processing Systems},
year = {1993},
pages = {1135-1142},
url = {https://mlanthology.org/neurips/1993/plutowski1993neurips-learning/}
}