Frame Rate Fusion and Upsampling of EO/LIDAR Data for Multiple Platforms
Abstract
We propose a method for fusing a LIDAR point cloud with camera data in real time, which also backfills the many data holes LIDAR creates. This is done in a way that leverages image features to weight how point clouds are filled. Multithreaded programming and GP-GPU methods allow us to obtain 10 fps with a Velodyne 64E LIDAR completely fused in 360° using a Ladybug panoramic camera. The method also generalizes to other kinds of point clouds, such as those obtained by aerial vehicles. The primary advantage of our approach is that it combines 360° fusion with upsampling in real time without mode smoothing.
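The abstract describes filling LIDAR data holes using image features to weight the interpolation. As a minimal sketch (not the authors' implementation, and with all function and parameter names our own), this kind of image-guided backfilling can be illustrated with a joint-bilateral-style filter: each hole in a sparse depth map is filled from nearby valid depths, weighted by both spatial distance and similarity of the guiding image intensity.

```python
import numpy as np

def guided_depth_fill(sparse_depth, image, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Fill holes in a sparse depth map (0 marks a hole) using a joint
    bilateral filter guided by image intensity, so depth is not smoothed
    across strong image edges."""
    h, w = sparse_depth.shape
    out = sparse_depth.copy()
    for y, x in zip(*np.where(sparse_depth == 0)):
        # Local window around the hole, clipped to the image bounds.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = sparse_depth[y0:y1, x0:x1]
        patch_i = image[y0:y1, x0:x1]
        valid = patch_d > 0
        if not valid.any():
            continue  # no depth support in this window; leave the hole
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # Spatial weight: nearer samples count more.
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
        # Range weight: samples with similar image intensity count more.
        w_r = np.exp(-((patch_i - image[y, x]) ** 2) / (2.0 * sigma_r ** 2))
        wgt = (w_s * w_r)[valid]
        out[y, x] = (wgt * patch_d[valid]).sum() / wgt.sum()
    return out
```

Achieving the paper's reported 10 fps over a full 360° panorama would additionally require the multithreaded and GP-GPU parallelization the abstract mentions; the per-pixel loop above is sequential and only illustrates the weighting scheme.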
Cite
Text
Mundhenk et al. "Frame Rate Fusion and Upsampling of EO/LIDAR Data for Multiple Platforms." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014. doi:10.1109/CVPRW.2014.117

Markdown
[Mundhenk et al. "Frame Rate Fusion and Upsampling of EO/LIDAR Data for Multiple Platforms." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014.](https://mlanthology.org/cvprw/2014/mundhenk2014cvprw-frame/) doi:10.1109/CVPRW.2014.117

BibTeX
@inproceedings{mundhenk2014cvprw-frame,
title = {{Frame Rate Fusion and Upsampling of EO/LIDAR Data for Multiple Platforms}},
author = {Mundhenk, T. Nathan and Kim, Kyungnam and Owechko, Yuri},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2014},
pages = {762-769},
doi = {10.1109/CVPRW.2014.117},
url = {https://mlanthology.org/cvprw/2014/mundhenk2014cvprw-frame/}
}