Learning to Generate Realistic LiDAR Point Clouds

Abstract

We present LiDARGen, a novel, effective, and controllable generative model that produces realistic LiDAR point cloud sensory readings. Our method leverages the powerful score-matching energy-based model and formulates the point cloud generation process as a stochastic denoising process in the equirectangular view. This model allows us to sample diverse and high-quality point cloud samples with guaranteed physical feasibility and controllability. We validate the effectiveness of our method on the challenging KITTI-360 and nuScenes datasets. The quantitative and qualitative results show that our approach produces more realistic samples than other generative models. Furthermore, LiDARGen can sample point clouds conditioned on inputs without retraining. We demonstrate that our proposed generative model can be directly used to densify LiDAR point clouds.
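The abstract's core idea can be illustrated with a minimal sketch: annealed Langevin dynamics (the standard sampler for score-based models) run on an equirectangular range image, followed by spherical back-projection to 3D points. Everything below is an assumption for illustration only: `score_fn` is a stand-in for the paper's learned noise-conditional score network (here it is just the score of a Gaussian prior), and the image resolution and vertical field of view are hypothetical, not taken from the paper.

```python
import numpy as np

H, W = 64, 1024  # hypothetical equirectangular range-image resolution


def score_fn(x, sigma):
    # Stand-in for LiDARGen's learned score network: this is merely the
    # score of an isotropic Gaussian prior, used so the sketch runs.
    return -x / (sigma ** 2 + 1.0)


def annealed_langevin_sample(score_fn, sigmas, n_steps=10, eps=2e-5, seed=None):
    """Sample a range image via annealed Langevin dynamics:
    start from noise and denoise over a decreasing noise schedule."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(H, W))
    for sigma in sigmas:  # anneal from large sigma to small
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            z = rng.normal(size=x.shape)
            x = x + 0.5 * step * score_fn(x, sigma) + np.sqrt(step) * z
    return x


def range_image_to_points(r):
    """Back-project an equirectangular depth map to 3D points.
    Sampling in this view yields one return per beam direction,
    which is what makes the samples physically feasible."""
    h, w = r.shape
    yaw = np.linspace(-np.pi, np.pi, w, endpoint=False)
    pitch = np.linspace(np.radians(2.0), np.radians(-24.8), h)  # assumed FOV
    yaw, pitch = np.meshgrid(yaw, pitch)
    x = r * np.cos(pitch) * np.cos(yaw)
    y = r * np.cos(pitch) * np.sin(yaw)
    z = r * np.sin(pitch)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Conditional uses such as densification follow the usual score-model recipe of adding a data-consistency term to each Langevin step, which is why no retraining is needed; the details here are a sketch, not the paper's exact procedure.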

Cite

Text

Zyrianov et al. "Learning to Generate Realistic LiDAR Point Clouds." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20050-2_2

Markdown

[Zyrianov et al. "Learning to Generate Realistic LiDAR Point Clouds." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/zyrianov2022eccv-learning/) doi:10.1007/978-3-031-20050-2_2

BibTeX

@inproceedings{zyrianov2022eccv-learning,
  title     = {{Learning to Generate Realistic LiDAR Point Clouds}},
  author    = {Zyrianov, Vlas and Zhu, Xiyue and Wang, Shenlong},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20050-2_2},
  url       = {https://mlanthology.org/eccv/2022/zyrianov2022eccv-learning/}
}