STReSSD: Sim-to-Real from Sound for Stochastic Dynamics

Abstract

Sound is an information-rich medium that captures dynamic physical events. This work presents STReSSD, a framework that uses sound to bridge the simulation-to-reality gap for stochastic dynamics, demonstrated for the canonical case of a bouncing ball. A physically-motivated noise model is presented to capture the stochastic behavior of the ball upon collision with the environment. A likelihood-free Bayesian inference framework is used to infer the parameters of the noise model, as well as a material property called the coefficient of restitution, from audio observations. The same inference framework and the calibrated stochastic simulator are then used to learn a probabilistic model of ball dynamics. The predictive capabilities of the dynamics model are tested in two robotic experiments. First, open-loop predictions anticipate the probabilistic success of bouncing a ball into a cup. The second experiment integrates audio perception with a robotic arm to track and deflect a bouncing ball in real-time. We envision that this work is a step towards integrating audio-based inference for dynamic robotic tasks.
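The sketch below illustrates the idea summarized in the abstract: a stochastic bouncing-ball simulator whose coefficient of restitution is perturbed by a noise model at each collision, calibrated against observed inter-bounce times (e.g., extracted from audio onsets) with a simple rejection-ABC stand-in for the paper's likelihood-free Bayesian inference. The truncated-Gaussian noise model, prior ranges, and function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming a truncated-Gaussian noise model on the
# coefficient of restitution and rejection ABC for calibration.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def bounce_times(drop_height, e_mean, e_std, n_bounces, rng):
    """Simulate inter-bounce intervals of a ball dropped from rest.

    At each collision the coefficient of restitution is drawn from a
    truncated Gaussian (a hypothetical stand-in for the paper's
    physically-motivated noise model)."""
    v = np.sqrt(2.0 * G * drop_height)    # speed just before the first impact
    intervals = []
    for _ in range(n_bounces):
        e = np.clip(rng.normal(e_mean, e_std), 0.0, 1.0)
        v *= e                            # rebound speed after the collision
        intervals.append(2.0 * v / G)     # time of flight until the next impact
    return np.array(intervals)

def rejection_abc(observed, drop_height, n_samples, eps, rng):
    """Keep (e_mean, e_std) samples whose simulated bounce intervals
    fall within eps of the observed intervals."""
    accepted = []
    for _ in range(n_samples):
        e_mean = rng.uniform(0.5, 1.0)    # assumed prior ranges
        e_std = rng.uniform(0.0, 0.1)
        sim = bounce_times(drop_height, e_mean, e_std, len(observed), rng)
        if np.linalg.norm(sim - observed) < eps:
            accepted.append((e_mean, e_std))
    return np.array(accepted)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "observed" intervals standing in for audio-derived bounce onsets.
    observed = bounce_times(1.0, 0.85, 0.03, 5, rng)
    posterior = rejection_abc(observed, 1.0, 20000, 0.1, rng)
    if len(posterior):
        print("posterior mean coefficient of restitution:", posterior[:, 0].mean())

In the paper, the calibrated stochastic simulator produced by this kind of inference is then reused to learn a probabilistic model of ball dynamics for the robotic experiments.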

Cite

Text

Matl et al. "STReSSD: Sim-to-Real from Sound for Stochastic Dynamics." Conference on Robot Learning, 2020.

Markdown

[Matl et al. "STReSSD: Sim-to-Real from Sound for Stochastic Dynamics." Conference on Robot Learning, 2020.](https://mlanthology.org/corl/2020/matl2020corl-stressd/)

BibTeX

@inproceedings{matl2020corl-stressd,
  title     = {{STReSSD: Sim-to-Real from Sound for Stochastic Dynamics}},
  author    = {Matl, Carolyn and Narang, Yashraj and Fox, Dieter and Bajcsy, Ruzena and Ramos, Fabio},
  booktitle = {Conference on Robot Learning},
  year      = {2020},
  pages     = {935--958},
  volume    = {155},
  url       = {https://mlanthology.org/corl/2020/matl2020corl-stressd/}
}