A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-like Environments

Abstract

This paper presents a reinforcement connectionist system that finds and learns suitable situation-action rules to generate feasible paths for a point robot in a 2D environment with circular obstacles. The basic reinforcement algorithm is extended with a strategy for discovering stable solution paths. Equipped with this strategy and a powerful codification scheme, the path-finder (i) learns quickly, (ii) deals with continuous-valued inputs and outputs, (iii) exhibits good noise tolerance and generalization capabilities, (iv) copes with dynamic environments, and (v) solves an instance of the path-finding problem with strong performance demands.
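To illustrate the kind of learning the abstract describes, here is a minimal, hypothetical sketch of reinforcement learning of a situation-action rule: in one fixed "situation" (say, an obstacle ahead of the robot), the learner picks among three candidate headings, and only one of them (veering right, index 1 in this toy setup) both clears the obstacle and moves toward the goal, so it earns reward 1 while the others earn 0. The reward function, the discrete action set, and all numbers are assumptions made for this sketch; the paper's system works with continuous-valued inputs and outputs and a richer codification scheme.

```python
import math
import random

def softmax(prefs):
    # Turn action preferences (logits) into a probability distribution.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def reward(action):
    # Hypothetical reward: 1 only for the collision-free, goal-directed
    # heading (action 1); the other headings hit the obstacle or move away.
    return 1.0 if action == 1 else 0.0

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0, 0.0]  # preferences over 3 candidate headings
    for _ in range(steps):
        probs = softmax(prefs)
        a = rng.choices([0, 1, 2], weights=probs)[0]  # stochastic exploration
        r = reward(a)
        # REINFORCE-style update: reward times the log-probability gradient,
        # so rewarded actions become more likely in this situation.
        for i in range(3):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * r * grad
    return prefs

prefs = train()
best = prefs.index(max(prefs))  # the learned greedy heading for this situation
```

After training, the preference for the rewarded heading dominates, i.e. the situation-action rule "obstacle ahead: veer right" has been learned from reward alone, without ever being told which action was correct.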

Cite

Text

del R. Millán and Torras. "A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-like Environments." Machine Learning, 1992. doi:10.1007/BF00992702

Markdown

[del R. Millán and Torras. "A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-like Environments." Machine Learning, 1992.](https://mlanthology.org/mlj/1992/delrmillan1992mlj-reinforcement/) doi:10.1007/BF00992702

BibTeX

@article{delrmillan1992mlj-reinforcement,
  title     = {{A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-like Environments}},
  author    = {del R. Millán, José and Torras, Carme},
  journal   = {Machine Learning},
  year      = {1992},
  pages     = {363--395},
  doi       = {10.1007/BF00992702},
  volume    = {8},
  url       = {https://mlanthology.org/mlj/1992/delrmillan1992mlj-reinforcement/}
}