RangeLDM: Fast Realistic LiDAR Point Cloud Generation
Abstract
Autonomous driving demands high-quality LiDAR data, yet the cost of physical LiDAR sensors presents a significant scaling-up challenge. While recent efforts have explored deep generative models to address this issue, they often consume substantial computational resources with slow generation speeds while suffering from a lack of realism. To address these limitations, we introduce RangeLDM, a novel approach for rapidly generating high-quality range-view LiDAR point clouds via latent diffusion models. We achieve this by correcting range-view data distribution for accurate projection from point clouds to range images via Hough voting, which has a critical impact on generative learning. We then compress the range images into a latent space with a variational autoencoder, and leverage a diffusion model to enhance expressivity. Additionally, we instruct the model to preserve 3D structural fidelity by devising a range-guided discriminator. Experimental results on KITTI-360 and nuScenes datasets demonstrate both the robust expressiveness and fast speed of our LiDAR point cloud generation.
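The paper's pipeline starts by projecting LiDAR point clouds into range images. As a rough illustration only (not the authors' Hough-voting correction, which estimates the true per-beam elevation angles), a minimal sketch of the standard spherical range-view projection is below; the field-of-view values are assumed placeholders for a typical 64-beam sensor, not parameters from the paper.

```python
import numpy as np

def points_to_range_image(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) xyz point cloud onto an H x W range image.

    Simplified spherical projection; fov_up/fov_down (degrees) are
    illustrative values for a generic 64-beam LiDAR, not taken from RangeLDM.
    """
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Map azimuth to columns and elevation to rows.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * W), 0, W - 1).astype(np.int64)
    v = np.clip(np.floor((1.0 - (pitch - fov_down_r) / fov) * H), 0, H - 1).astype(np.int64)

    # Keep the nearest return per pixel: write in order of decreasing
    # range so closer points overwrite farther ones; -1 marks empty cells.
    order = np.argsort(-r)
    img = np.full((H, W), -1.0, dtype=np.float32)
    img[v[order], u[order]] = r[order]
    return img
```

A naive projection like this assumes evenly spaced beam elevations; the paper's point is that real sensors deviate from this, so correcting the projection (via Hough voting over per-beam angles) matters for the quality of the learned generative model.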
Cite
Text
Hu et al. "RangeLDM: Fast Realistic LiDAR Point Cloud Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72784-9_7
Markdown
[Hu et al. "RangeLDM: Fast Realistic LiDAR Point Cloud Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/hu2024eccv-rangeldm/) doi:10.1007/978-3-031-72784-9_7
BibTeX
@inproceedings{hu2024eccv-rangeldm,
  title     = {{RangeLDM: Fast Realistic LiDAR Point Cloud Generation}},
  author    = {Hu, Qianjiang and Zhang, Zhimin and Hu, Wei},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72784-9_7},
  url       = {https://mlanthology.org/eccv/2024/hu2024eccv-rangeldm/}
}