Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation
Abstract
Reinforcement learning (RL) has shown remarkable proficiency in developing robust control policies for contact-rich applications. However, it typically requires meticulous Markov Decision Process (MDP) design tailored to each task and robotic platform. This work addresses that challenge by introducing a systematic approach to behavior synthesis and control for multi-contact loco-manipulation. We define a task-independent MDP formulation to learn robust RL policies using a single demonstration (per task) generated by a fast model-based trajectory optimization method. Our framework is validated on diverse real-world tasks, such as navigating spring-loaded doors and manipulating heavy dishwashers. The learned behaviors can handle dynamic uncertainties and external disturbances, showcasing recovery maneuvers such as re-grasping objects during execution. Finally, we successfully transfer the policies to a real robot, demonstrating the approach’s practical viability.
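To make the guided-RL idea concrete: the per-task demonstration serves as a reference that shapes the reward, rather than being imitated verbatim. The snippet below is a minimal, hypothetical sketch of a generic demonstration-tracking reward term, assuming a time-indexed reference trajectory from trajectory optimization; the function name, state dimensions, and the width `sigma` are illustrative assumptions, not the paper's actual MDP formulation.

```python
import numpy as np

def demo_tracking_reward(state, demo_state, sigma=0.25):
    """Exponentiated tracking term (hypothetical): reward is near 1.0 when
    the policy's state stays close to the time-indexed demonstration state,
    and decays smoothly as the rollout drifts away from the reference."""
    err = np.linalg.norm(state - demo_state)
    return float(np.exp(-(err / sigma) ** 2))

# Toy usage: a 12-dimensional reference trajectory (e.g., joint positions)
# standing in for the single offline demonstration, sampled at 100 steps.
rng = np.random.default_rng(0)
demo = np.cumsum(0.01 * rng.standard_normal((100, 12)), axis=0)

t = 37                                             # current phase index along the demo
state = demo[t] + 0.05 * rng.standard_normal(12)   # perturbed rollout state
print(demo_tracking_reward(state, demo[t]))        # close to 1.0 when tracking is tight
```

Because the demonstration enters only through a smooth reward term, the learned policy remains free to deviate from the reference, which is what allows recovery behaviors under disturbances.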
Cite
Text
Sleiman et al. "Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation." Proceedings of The 8th Conference on Robot Learning, 2024.

Markdown
[Sleiman et al. "Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/sleiman2024corl-guided/)

BibTeX
@inproceedings{sleiman2024corl-guided,
  title     = {{Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation}},
  author    = {Sleiman, Jean Pierre and Mittal, Mayank and Hutter, Marco},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  year      = {2024},
  pages     = {531--546},
  volume    = {270},
  url       = {https://mlanthology.org/corl/2024/sleiman2024corl-guided/}
}