A Practice Strategy for Robot Learning Control
Abstract
"Trajectory Extension Learning" is a new technique for Learning Control in Robots which assumes that there exists some parameter of the desired trajectory that can be smoothly varied from a region of easy solvability of the dynamics to a region of desired behavior which may have more difficult dynamics. By gradually varying the parameter, practice movements remain near the desired path while a Neural Network learns to approximate the inverse dynamics. For example, the average speed of motion might be varied, and the in(cid:173) verse dynamics can be "bootstrapped" from slow movements with simpler dynamics to fast movements. This provides an example of the more general concept of a "Practice Strategy" in which a se(cid:173) quence of intermediate tasks is used to simplify learning a complex task. I show an example of the application of this idea to a real 2-joint direct drive robot arm.
Cite
Text
Sanger. "A Practice Strategy for Robot Learning Control." Neural Information Processing Systems, 1992.Markdown
[Sanger. "A Practice Strategy for Robot Learning Control." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/sanger1992neurips-practice/)BibTeX
@inproceedings{sanger1992neurips-practice,
title = {{A Practice Strategy for Robot Learning Control}},
author = {Sanger, Terence D.},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {335-341},
url = {https://mlanthology.org/neurips/1992/sanger1992neurips-practice/}
}