LiDAR Inpainting from a Single Image

Abstract

Range scans produced by LiDAR (Light Detection and Ranging) intrinsically suffer from "shadows" of missing data cast on surfaces by occluding objects. In this paper, we show how a single additional image of the scene from a different perspective can be used to automatically fill in high-detail structure in these shadow regions. The technique is inspired by inpainting algorithms from the computer vision literature, intelligently filling in missing information by exploiting the observation that similar image regions often correspond to similar 3D geometry. We first create an example database of image patch/3D geometry pairs from the non-occluded parts of the LiDAR scan, describing each uniform-scale region in 3D with a rotationally invariant image descriptor. We then iteratively select the best location on the current shadow boundary based on the amount of known supporting geometry, filling in blocks of 3D geometry using the best match from the example database and a local 3D registration. We demonstrate that our algorithm can generate realistic, high-detail new geometry in several synthetic and real-world examples.
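The greedy exemplar-based fill described in the abstract can be sketched roughly as follows. This is an illustrative simplification, not the paper's method: it operates on a scalar depth map rather than full 3D geometry, matches raw pixel patches instead of the rotationally invariant image descriptor, and omits the local 3D registration step. All function and variable names here are hypothetical.

```python
import numpy as np

def build_exemplar_db(image, depth, mask, patch=5):
    # Collect (image-patch, depth-patch) pairs from fully observed regions,
    # mirroring the paper's example database of image/geometry pairs.
    half = patch // 2
    keys, values = [], []
    H, W = image.shape
    for r in range(half, H - half):
        for c in range(half, W - half):
            if mask[r - half:r + half + 1, c - half:c + half + 1].all():
                keys.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
                values.append(depth[r - half:r + half + 1, c - half:c + half + 1].copy())
    return np.array(keys), values

def inpaint_depth(image, depth, mask, patch=5):
    # Greedy fill: repeatedly pick the shadow-boundary pixel with the most
    # known supporting pixels, then copy depth from the best image match.
    half = patch // 2
    keys, values = build_exemplar_db(image, depth, mask, patch)
    depth, mask = depth.copy(), mask.copy()
    H, W = image.shape
    while not mask.all():
        best, best_support = None, -1
        for r in range(half, H - half):
            for c in range(half, W - half):
                if mask[r, c]:
                    continue
                support = mask[r - half:r + half + 1, c - half:c + half + 1].sum()
                if 0 < support > best_support:
                    best, best_support = (r, c), support
        if best is None:  # remaining holes touch the image border
            break
        r, c = best
        query = image[r - half:r + half + 1, c - half:c + half + 1].ravel()
        i = np.argmin(((keys - query) ** 2).sum(axis=1))
        # Copy exemplar depth only into the still-unknown pixels of the patch.
        tgt_mask = mask[r - half:r + half + 1, c - half:c + half + 1]
        tgt_depth = depth[r - half:r + half + 1, c - half:c + half + 1]
        tgt_depth[~tgt_mask] = values[i][~tgt_mask]
        tgt_mask[:] = True
    return depth
```

The support-count priority plays the role of the paper's boundary-selection rule: patches whose neighbourhood is mostly known are filled first, so new geometry always grows outward from reliable data.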

Cite

Text

Becker et al. "LiDAR Inpainting from a Single Image." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457441

Markdown

[Becker et al. "LiDAR Inpainting from a Single Image." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/becker2009iccvw-lidar/) doi:10.1109/ICCVW.2009.5457441

BibTeX

@inproceedings{becker2009iccvw-lidar,
  title     = {{LiDAR Inpainting from a Single Image}},
  author    = {Becker, Jacob and Stewart, Charles V. and Radke, Richard J.},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2009},
  pages     = {1441--1448},
  doi       = {10.1109/ICCVW.2009.5457441},
  url       = {https://mlanthology.org/iccvw/2009/becker2009iccvw-lidar/}
}