Motion Policy Networks

Abstract

Collision-free motion generation in unknown environments is a core building block for robot manipulation. Generating such motions is challenging due to multiple objectives: not only must the solutions be optimal, but the motion generator itself must be fast enough for real-time performance and reliable enough for practical deployment. A wide variety of methods have been proposed, ranging from local controllers to global planners, which are often combined to offset their shortcomings. We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) that generates collision-free, smooth motion from just a single depth camera observation. M$\pi$Nets are trained on over 3 million motion planning problems in more than 500,000 environments. Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes. They are 46% better than prior neural planners and more robust than local control policies. Despite being trained only in simulation, M$\pi$Nets transfer well to the real robot with noisy partial point clouds. Videos and code are available at https://mpinets.github.io

Cite

Text

Fishman et al. "Motion Policy Networks." Conference on Robot Learning, 2022.

Markdown

[Fishman et al. "Motion Policy Networks." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/fishman2022corl-motion/)

BibTeX

@inproceedings{fishman2022corl-motion,
  title     = {{Motion Policy Networks}},
  author    = {Fishman, Adam and Murali, Adithyavairavan and Eppner, Clemens and Peele, Bryan and Boots, Byron and Fox, Dieter},
  booktitle = {Conference on Robot Learning},
  year      = {2022},
  pages     = {967--977},
  volume    = {205},
  url       = {https://mlanthology.org/corl/2022/fishman2022corl-motion/}
}