Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract)

Abstract

In this paper, we present a two-step framework for high-precision dense depth perception from stereo RGB images and sparse LiDAR input. In the first step, we train a deep neural network, in a novel self-supervised manner, to predict a dense depth map from the left image and the sparse LiDAR data. In the second step, we compute a disparity map from the predicted depths and refine it by ensuring that, for every pixel in the left image, its match in the right image under the final disparity is a local optimum.
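The second step above can be sketched in a minimal form. This is an illustrative reconstruction, not the authors' implementation: it assumes a rectified stereo pair, uses the standard relation disparity = f·B / Z to convert predicted depth to disparity, and stands in a simple per-pixel absolute intensity difference for the paper's matching cost when checking that each refined disparity is a local optimum within a small search window.

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    """Standard stereo relation d = f * B / Z (assumes a rectified pair)."""
    return focal * baseline / np.maximum(depth, 1e-6)

def refine_disparity(left, right, disp, radius=2):
    """For each left-image pixel, test integer disparities within +/- radius
    of the predicted value and keep the one minimizing a simple matching
    cost (absolute intensity difference, a stand-in for the paper's cost)."""
    h, w = left.shape
    refined = disp.copy()
    for y in range(h):
        for x in range(w):
            d0 = int(round(disp[y, x]))
            best_cost, best_d = np.inf, disp[y, x]
            for d in range(max(0, d0 - radius), d0 + radius + 1):
                xr = x - d  # candidate match column in the right image
                if 0 <= xr < w:
                    cost = abs(float(left[y, x]) - float(right[y, xr]))
                    if cost < best_cost:
                        best_cost, best_d = cost, d
            refined[y, x] = best_d
    return refined
```

A real system would use a patch-based cost (e.g., SAD or census over a window) rather than single-pixel differences, but the control flow, searching a small neighborhood around the network's prediction instead of the full disparity range, is the key idea.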

Cite

Text

Jin and Duggirala. "Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7185

Markdown

[Jin and Duggirala. "Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/jin2020aaai-re/) doi:10.1609/AAAI.V34I10.7185

BibTeX

@inproceedings{jin2020aaai-re,
  title     = {{Re-Thinking LiDAR-Stereo Fusion Frameworks (Student Abstract)}},
  author    = {Jin, Qilin and Duggirala, Parasara Sridhar},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {13827-13828},
  doi       = {10.1609/AAAI.V34I10.7185},
  url       = {https://mlanthology.org/aaai/2020/jin2020aaai-re/}
}