Learning to Avoid Obstacles Through Reinforcement
Abstract
Motion planning and control involve mainly symbolic and subsymbolic processing, respectively, as does the learning of these capabilities. This paper focuses on a motion control aspect, namely, the learning of obstacle-avoidance abilities. We present a reinforcement-based connectionist system able to find and learn obstacle-avoiding paths for a mobile robot in a non-maze-like 2D environment. The conclusions section offers some directions for interfacing the subsymbolic system developed here with a symbolic path planner.
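The abstract does not spell out the paper's learning rule, which is a reinforcement-based connectionist (neural) controller rather than a tabular method. As an illustrative stand-in only, the sketch below shows the general idea of learning obstacle-avoiding paths from scalar rewards, using tabular Q-learning on a hypothetical toy 2D grid: collisions are penalized, reaching the goal is rewarded, and the greedy policy that emerges traces a collision-free path.

```python
import random

# Hypothetical toy 2D grid: 0 = free, 1 = obstacle. Start at (0, 0),
# goal at (3, 3). Illustrative stand-in only: the paper itself uses a
# reinforcement-based connectionist controller, not this tabular scheme.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GOAL = (3, 3)

def step(state, action):
    """Apply an action; bumping a wall or obstacle yields a penalty."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < 4 and 0 <= c < 4) or GRID[r][c] == 1:
        return state, -1.0          # collision: stay put, negative reward
    if (r, c) == GOAL:
        return (r, c), 1.0          # reached the goal
    return (r, c), -0.01            # small cost per move

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn Q-values by trial and error from the scalar reward signal."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> estimated value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            s2, reward = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                reward + gamma * best_next - q.get((s, a), 0.0))
            s = s2
            if s == GOAL:
                break
    return q

def greedy_path(q, max_steps=20):
    """Follow the learned greedy policy from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _ = step(s, ACTIONS[a])
        path.append(s)
        if s == GOAL:
            break
    return path

q = train()
path = greedy_path(q)
print(path)  # a collision-free route from start to goal
```

Because collisions leave the agent in place with a penalty, every state the learned path visits is obstacle-free; the negative per-step cost also pushes the policy toward short routes.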
Cite
Text
del R. Millán and Torras. "Learning to Avoid Obstacles Through Reinforcement." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50062-3
Markdown
[del R. Millán and Torras. "Learning to Avoid Obstacles Through Reinforcement." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/delrmillan1991icml-learning/) doi:10.1016/B978-1-55860-200-7.50062-3
BibTeX
@inproceedings{delrmillan1991icml-learning,
title = {{Learning to Avoid Obstacles Through Reinforcement}},
author = {del R. Millán, José and Torras, Carme},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {298--302},
doi = {10.1016/B978-1-55860-200-7.50062-3},
url = {https://mlanthology.org/icml/1991/delrmillan1991icml-learning/}
}