Continuously Improving Mobile Manipulation with Autonomous Real-World RL

Abstract

We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision. This is enabled by 1) task-relevant autonomy, which guides exploration towards object interactions and prevents stagnation near goal states, 2) efficient policy learning, achieved by leveraging basic task knowledge in behavior priors, and 3) generic rewards that combine human-interpretable semantic information with low-level, fine-grained observations. We demonstrate that our approach allows Spot robots to continually improve their performance on a set of four challenging mobile manipulation tasks, obtaining an average success rate of 80% across tasks, a 3-4x improvement over existing approaches. Videos can be found at https://continual-mobile-manip.github.io/.

Cite

Text

Mendonca et al. "Continuously Improving Mobile Manipulation with Autonomous Real-World RL." Proceedings of The 8th Conference on Robot Learning, 2024.

Markdown

[Mendonca et al. "Continuously Improving Mobile Manipulation with Autonomous Real-World RL." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/mendonca2024corl-continuously/)

BibTeX

@inproceedings{mendonca2024corl-continuously,
  title     = {{Continuously Improving Mobile Manipulation with Autonomous Real-World RL}},
  author    = {Mendonca, Russell and Panov, Emmanuel and Bucher, Bernadette and Wang, Jiuguang and Pathak, Deepak},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  year      = {2024},
  pages     = {5204--5219},
  volume    = {270},
  url       = {https://mlanthology.org/corl/2024/mendonca2024corl-continuously/}
}