Scaling up Reinforcement Learning for Robot Control
Abstract
The aim of this research is to extend the state of the art of reinforcement learning and enable its application to complex robot-learning problems. This paper presents a series of scaling-up extensions to reinforcement learning, including: generalization by neural networks, using action models, teaching, hierarchical learning, and having a short-term memory. These extensions have been tested in a physically realistic robot simulator, and combined to solve a complex robot-learning problem. Simulation results indicate that each of the extensions can yield either a significant learning speedup or new capabilities. This research concludes that it is possible to build artificial agents that can acquire complex control policies effectively by reinforcement learning.
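The abstract above builds on the standard one-step Q-learning update, which the paper extends with neural-network generalization and other mechanisms. As context, here is a minimal illustrative sketch of plain Q-learning on a hypothetical 1-D corridor task; a lookup table stands in for the paper's neural-network approximator, and all task details (state count, rewards, hyperparameters) are assumptions for illustration only:

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = (-1, +1)    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q-table: maps (state, action) to an estimated return
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # one-step Q-learning update toward the bootstrapped target
            target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy derived from the learned Q-values
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Replacing the table `q` with a trained function approximator is what enables generalization across states, one of the scaling-up extensions the abstract lists.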
Cite
Text
Lin. "Scaling up Reinforcement Learning for Robot Control." International Conference on Machine Learning, 1993. doi:10.1016/B978-1-55860-307-3.50030-7
Markdown
[Lin. "Scaling up Reinforcement Learning for Robot Control." International Conference on Machine Learning, 1993.](https://mlanthology.org/icml/1993/lin1993icml-scaling/) doi:10.1016/B978-1-55860-307-3.50030-7
BibTeX
@inproceedings{lin1993icml-scaling,
title = {{Scaling up Reinforcement Learning for Robot Control}},
author = {Lin, Long-Ji},
booktitle = {International Conference on Machine Learning},
year = {1993},
pages = {182--189},
doi = {10.1016/B978-1-55860-307-3.50030-7},
url = {https://mlanthology.org/icml/1993/lin1993icml-scaling/}
}