Recovering and Simulating Pedestrians in the Wild

Abstract

Sensor simulation is a key component for testing the performance of self-driving vehicles and for data augmentation to better train perception systems. Typical approaches rely on artists to create both 3D assets and their animations to generate a new scenario. This, however, does not scale. In contrast, we propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around. Towards this goal, we formulate the problem as energy minimization in a deep structured model that exploits human shape priors, reprojection consistency with 2D poses extracted from images, and a ray-caster that encourages the reconstructed mesh to agree with the LiDAR readings. Importantly, we do not require any ground-truth 3D scans or 3D pose annotations. We then incorporate the reconstructed pedestrian asset bank into a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.
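
A minimal sketch of the energy being minimized, under assumed notation (shape parameters β, pose parameters θ, and weights λ are illustrative placeholders, not the paper's exact formulation):

E(\beta, \theta) \;=\; E_{\text{prior}}(\beta, \theta) \;+\; \lambda_{\text{2D}}\, E_{\text{reproj}}(\beta, \theta) \;+\; \lambda_{\text{ray}}\, E_{\text{lidar}}(\beta, \theta)

Here E_prior reflects the human shape prior, E_reproj penalizes disagreement between the projected 3D joints of the reconstructed mesh and the 2D poses extracted from images, and E_lidar is the ray-casting term that encourages the mesh to agree with the observed LiDAR returns; minimization is over β and θ, with no ground-truth 3D scans or 3D pose annotations required.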

Cite

Text

Yang et al. "Recovering and Simulating Pedestrians in the Wild." Conference on Robot Learning, 2020.

Markdown

[Yang et al. "Recovering and Simulating Pedestrians in the Wild." Conference on Robot Learning, 2020.](https://mlanthology.org/corl/2020/yang2020corl-recovering/)

BibTeX

@inproceedings{yang2020corl-recovering,
  title     = {{Recovering and Simulating Pedestrians in the Wild}},
  author    = {Yang, Ze and Manivasagam, Sivabalan and Liang, Ming and Yang, Bin and Ma, Wei-Chiu and Urtasun, Raquel},
  booktitle = {Conference on Robot Learning},
  year      = {2020},
  pages     = {419--431},
  volume    = {155},
  url       = {https://mlanthology.org/corl/2020/yang2020corl-recovering/}
}