Motion Sketch: Acquisition of Visual Motion Guided Behaviors

Abstract

Sensor and motor systems are not separable when autonomous agents must accomplish tasks in a dynamic environment. This paper proposes a method to represent the interaction between a vision-based learning agent and its environment. The method, called a "motion sketch," enables a one-eyed mobile robot to learn several behaviors such as obstacle avoidance and target pursuit. A motion sketch is a collection of visual motion cues detected by a group of visual tracking routines whose visual behaviors are determined by the individual tasks, and it is tightly coupled with motor behaviors obtained by Q-learning, one of the most widely used reinforcement learning methods, based on those visual motion cues. In order for the motion sketch to work, the fundamental relationship between visual motions and motor commands is first obtained, and then Q-learning is applied to obtain the set of motor commands tightly coupled with the motion cues. Finally, the experimental results of real robot implementation w...
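The abstract couples motor commands to discretized visual motion cues via Q-learning. As a minimal illustration of that coupling, the sketch below runs tabular Q-learning in a toy pursuit setting; the state labels, action names, rewards, and learning constants are all hypothetical stand-ins, not the paper's actual formulation.

```python
import random

# Hypothetical discretization: states are coarse visual-motion-cue labels,
# actions are motor commands (not taken from the paper's real setup).
STATES = ["target-left", "target-center", "target-right"]
ACTIONS = ["turn-left", "forward", "turn-right"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning constants


def q_update(Q, s, a, r, s_next):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])


def epsilon_greedy(Q, s):
    """Explore with probability EPSILON, otherwise act greedily."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])


def step(s, a):
    """Toy pursuit dynamics: moving forward while the target is centered
    is rewarded; turning toward an off-center target re-centers it."""
    if s == "target-center" and a == "forward":
        return "target-center", 1.0
    if s == "target-left" and a == "turn-left":
        return "target-center", 0.0
    if s == "target-right" and a == "turn-right":
        return "target-center", 0.0
    return random.choice(STATES), -0.1


random.seed(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
s = "target-left"
for _ in range(5000):
    a = epsilon_greedy(Q, s)
    s_next, r = step(s, a)
    q_update(Q, s, a, r, s_next)
    s = s_next

# After learning, the greedy action with the target centered
# should converge to "forward".
print(max(ACTIONS, key=lambda a: Q[("target-center", a)]))
```

The paper's actual method additionally learns the mapping between visual motions and motor commands before applying Q-learning; this toy skips that stage and assumes the cue-to-state discretization is given.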

Cite

Text

Nakamura and Asada. "Motion Sketch: Acquisition of Visual Motion Guided Behaviors." International Joint Conference on Artificial Intelligence, 1995.

Markdown

[Nakamura and Asada. "Motion Sketch: Acquisition of Visual Motion Guided Behaviors." International Joint Conference on Artificial Intelligence, 1995.](https://mlanthology.org/ijcai/1995/nakamura1995ijcai-motion/)

BibTeX

@inproceedings{nakamura1995ijcai-motion,
  title     = {{Motion Sketch: Acquisition of Visual Motion Guided Behaviors}},
  author    = {Nakamura, Takayuki and Asada, Minoru},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1995},
  pages     = {126--132},
  url       = {https://mlanthology.org/ijcai/1995/nakamura1995ijcai-motion/}
}