End-to-End Differentiable Model of Robot-Terrain Interactions
Abstract
We propose a differentiable model of robot-terrain interaction that predicts the expected robot trajectory given an onboard camera image and the robot controls. The model is trained on a real-world dataset covering terrains ranging from vegetation to man-made obstacles. Since interactions that endanger the robot are naturally absent from real-world training data, learning suffers from a training/testing distribution mismatch, and the quality of the result depends strongly on the model's generalization. We therefore propose a grey-box, explainable, physics-aware, and end-to-end differentiable model that achieves better generalization through strong geometric and physical priors. The model, which functions as an image-conditioned differentiable simulation, can generate millions of trajectories per second and provides interpretable intermediate outputs that enable efficient self-supervision. Our experimental evaluation demonstrates that it outperforms state-of-the-art methods.
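The abstract does not include code; the following is a minimal sketch of the grey-box structure it describes, assuming a PyTorch implementation with a CNN heightmap encoder and a deliberately simplified point-robot dynamics prior. All names, shapes, and the dynamics below are illustrative assumptions, not the authors' model.

import torch
import torch.nn as nn


class TerrainEncoder(nn.Module):
    """Hypothetical CNN mapping a camera image to a terrain heightmap.

    The heightmap is the kind of interpretable intermediate output the
    abstract mentions: it can be inspected or self-supervised against
    measured terrain geometry.
    """

    def __init__(self, grid: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) -> heightmap: (B, grid, grid)
        return self.net(image).squeeze(1)


def rollout(heightmap: torch.Tensor, controls: torch.Tensor,
            dt: float = 0.1, cell: float = 0.1) -> torch.Tensor:
    """Toy differentiable rollout of a point robot driven by (v, omega).

    Each step uses only differentiable torch ops; indexing into the
    heightmap is differentiable w.r.t. the height values, so a loss on
    the trajectory back-propagates into the terrain encoder.
    """
    B, T, _ = controls.shape
    x = torch.zeros(B)
    y = torch.zeros(B)
    yaw = torch.zeros(B)
    poses = []
    for t in range(T):
        v, omega = controls[:, t, 0], controls[:, t, 1]
        yaw = yaw + omega * dt
        x = x + v * torch.cos(yaw) * dt
        y = y + v * torch.sin(yaw) * dt
        # Nearest-cell height lookup (a real model would interpolate).
        gi = (x / cell).long().clamp(0, heightmap.shape[1] - 1)
        gj = (y / cell).long().clamp(0, heightmap.shape[2] - 1)
        z = heightmap[torch.arange(B), gi, gj]
        poses.append(torch.stack([x, y, z], dim=-1))
    return torch.stack(poses, dim=1)  # (B, T, 3) trajectory


# Usage: supervise predicted trajectories against recorded ones.
encoder = TerrainEncoder()
image = torch.rand(2, 3, 128, 128)
controls = torch.rand(2, 50, 2)     # 50 steps of (v, omega) commands
gt_traj = torch.rand(2, 50, 3)      # recorded ground-truth poses
traj = rollout(encoder(image), controls)
loss = ((traj - gt_traj) ** 2).mean()
loss.backward()                     # gradients reach the encoder

Because the physics prior is fixed and only the terrain encoder is learned, the rollout stays cheap and interpretable, which is the kind of structure the abstract credits for better generalization under distribution mismatch.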
Cite
Text
Agishev et al. "End-to-End Differentiable Model of Robot-Terrain Interactions." ICML 2024 Workshop: Differentiable Almost Everything, 2024. https://mlanthology.org/icmlw/2024/agishev2024icmlw-endtoend/
BibTeX
@inproceedings{agishev2024icmlw-endtoend,
title = {{End-to-End Differentiable Model of Robot-Terrain Interactions}},
author = {Agishev, Ruslan and Kubelka, Vladimír and Pecka, Martin and Svoboda, Tomáš and Zimmermann, Karel},
booktitle = {ICML 2024 Workshop: Differentiable Almost Everything},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/agishev2024icmlw-endtoend/}
}