Dynamic LiDAR Re-Simulation Using Compositional Neural Fields

Abstract

We introduce DyNFL, a novel neural field-based approach for high-fidelity re-simulation of LiDAR scans in dynamic driving scenes. DyNFL processes LiDAR measurements from dynamic environments, accompanied by bounding boxes of moving objects, to construct an editable neural field. This field, comprising separately reconstructed static background and dynamic objects, allows users to modify viewpoints, adjust object positions, and seamlessly add or remove objects in the re-simulated scene. A key innovation of our method is the neural field composition technique, which effectively integrates reconstructed neural assets from various scenes through a ray drop test, accounting for occlusions and transparent surfaces. Our evaluation with both synthetic and real-world environments demonstrates that DyNFL substantially improves dynamic scene LiDAR simulation, offering a combination of physical fidelity and flexible editing capabilities. Project page: https://shengyuh.github.io/dynfl
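
To make the composition idea concrete, below is a minimal sketch of how per-ray composition with a ray drop test could look. This is not the authors' implementation: the field interface, the `RayReturn` structure, and the pose handling are all assumptions inferred from the abstract.

```python
# Hypothetical sketch of compositional LiDAR ray rendering: each beam is
# queried against the static field and every dynamic object's field, and the
# nearest candidate return that survives a stochastic ray drop test becomes
# the simulated measurement. Interfaces and names are illustrative only.
from dataclasses import dataclass

import numpy as np


@dataclass
class RayReturn:
    depth: float      # range along the ray in meters; inf if no surface hit
    intensity: float  # predicted reflectance at the hit point
    drop_prob: float  # probability the LiDAR drops this return


def compose_ray(ray_o, ray_d, static_field, dynamic_fields, poses, rng):
    """Pick the nearest surviving return among all per-field candidates.

    `static_field` and each entry of `dynamic_fields` are assumed callables
    mapping a ray (origin, direction) to a RayReturn; `poses` holds 4x4
    world-to-object transforms for the current frame, which is where edited
    or newly inserted object trajectories would enter.
    """
    candidates = [static_field(ray_o, ray_d)]
    for field, T in zip(dynamic_fields, poses):
        # Transform the ray into the object's canonical frame before querying.
        o_local = T[:3, :3] @ ray_o + T[:3, 3]
        d_local = T[:3, :3] @ ray_d
        candidates.append(field(o_local, d_local))

    # Walk candidates in order of increasing range: an opaque near surface
    # (low drop probability) usually wins and occludes everything behind it,
    # while a transparent surface (high drop probability) tends to let the
    # ray pass through to the next-closest candidate.
    for ret in sorted(candidates, key=lambda r: r.depth):
        if np.isfinite(ret.depth) and rng.random() > ret.drop_prob:
            return ret
    return None  # ray dropped entirely: no LiDAR return for this beam
```

Sorting candidates by range and applying the drop test in that order is one way to approximate occlusion and pass-through behavior from per-field predictions alone, without merging the underlying fields.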

Cite

Text

Wu et al. "Dynamic LiDAR Re-Simulation Using Compositional Neural Fields." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01889

Markdown

[Wu et al. "Dynamic LiDAR Re-Simulation Using Compositional Neural Fields." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wu2024cvpr-dynamic/) doi:10.1109/CVPR52733.2024.01889

BibTeX

@inproceedings{wu2024cvpr-dynamic,
  title     = {{Dynamic LiDAR Re-Simulation Using Compositional Neural Fields}},
  author    = {Wu, Hanfeng and Zuo, Xingxing and Leutenegger, Stefan and Litany, Or and Schindler, Konrad and Huang, Shengyu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {19988--19998},
  doi       = {10.1109/CVPR52733.2024.01889},
  url       = {https://mlanthology.org/cvpr/2024/wu2024cvpr-dynamic/}
}