DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics
Abstract
A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this work, we introduce DextrAH-G, a depth-based dexterous grasping policy trained entirely in simulation that combines reinforcement learning, geometric fabrics, and teacher-student distillation. We address key challenges in joint arm-hand policy learning, such as high-dimensional observation and action spaces, the sim2real gap, collision avoidance, and hardware constraints. DextrAH-G enables a 23-motor arm-hand robot to safely and continuously grasp and transport a large variety of objects at high speed using multi-modal inputs including depth images, allowing generalization across object geometry. Videos at https://sites.google.com/view/dextrah-g.
Cite
Text
Lum et al. "DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics." Proceedings of The 8th Conference on Robot Learning, 2024.
Markdown
[Lum et al. "DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/lum2024corl-dextrahg/)
BibTeX
@inproceedings{lum2024corl-dextrahg,
  title     = {{DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics}},
  author    = {Lum, Tyler Ga Wei and Matak, Martin and Makoviychuk, Viktor and Handa, Ankur and Allshire, Arthur and Hermans, Tucker and Ratliff, Nathan D. and Van Wyk, Karl},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  year      = {2024},
  volume    = {270},
  pages     = {3182--3211},
  url       = {https://mlanthology.org/corl/2024/lum2024corl-dextrahg/}
}