Motion Perception in Reinforcement Learning with Dynamic Objects

Abstract

In dynamic environments, learned controllers are supposed to take motion into account when selecting the action to be taken. However, in existing reinforcement learning work, motion is rarely treated explicitly; it is rather assumed that the controller implicitly learns the necessary motion representation from temporal stacks of frames. In this paper, we show that for continuous control tasks, learning an explicit representation of motion improves the quality of the learned controller in dynamic scenarios. We demonstrate this on common benchmark tasks (Walker, Swimmer, Hopper), on target-reaching and ball-catching tasks with simulated robotic arms, and on a dynamic single-ball juggling task. Moreover, we find that, when equipped with an appropriate network architecture, the agent can on some tasks learn motion features with pure reinforcement learning, without additional supervision. Further, we find that using the image difference between the current and the previous frame as an additional input leads to better results than a temporal stack of frames.
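The two observation representations the abstract compares can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation; the frame shape, number of stacked frames, and preprocessing are assumptions.

```python
import numpy as np

def stack_frames(frames):
    """Temporal frame stack: concatenate the last k grayscale
    frames along a new channel axis (the common baseline input)."""
    return np.stack(frames, axis=-1)

def frame_difference(current, previous):
    """Image difference between the current and previous frame,
    the alternative input the abstract reports works better."""
    return current.astype(np.float32) - previous.astype(np.float32)

# Hypothetical 84x84 grayscale observations for illustration.
prev = np.zeros((84, 84), dtype=np.uint8)
curr = np.full((84, 84), 10, dtype=np.uint8)

stacked = stack_frames([prev, curr])       # shape (84, 84, 2)
diff = frame_difference(curr, prev)        # shape (84, 84)
```

The stack leaves it to the network to infer motion from raw consecutive frames, whereas the difference image makes the moving parts of the scene explicit in the input itself.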

Cite

Text

Amiranashvili et al. "Motion Perception in Reinforcement Learning with Dynamic Objects." Conference on Robot Learning, 2018.

Markdown

[Amiranashvili et al. "Motion Perception in Reinforcement Learning with Dynamic Objects." Conference on Robot Learning, 2018.](https://mlanthology.org/corl/2018/amiranashvili2018corl-motion/)

BibTeX

@inproceedings{amiranashvili2018corl-motion,
  title     = {{Motion Perception in Reinforcement Learning with Dynamic Objects}},
  author    = {Amiranashvili, Artemij and Dosovitskiy, Alexey and Koltun, Vladlen and Brox, Thomas},
  booktitle = {Conference on Robot Learning},
  year      = {2018},
  pages     = {156--168},
  url       = {https://mlanthology.org/corl/2018/amiranashvili2018corl-motion/}
}