A Multi-Sensor Fusion Framework in 3-D
Abstract
Most existing image fusion techniques operate in the 2-d image domain; they perform well for imagery of planar regions but fail in the presence of 3-d relief, producing inaccurate alignment of imagery from different sensors. This paper proposes a framework for multi-sensor image fusion in 3-d. Imagery from different sensors, specifically EO and IR, is fused in a common 3-d reference coordinate frame. A dense probabilistic and volumetric 3-d model is reconstructed from each sensor. The imagery is registered by aligning the 3-d models, since the underlying 3-d structure in the images is the true invariant information. The image intensities are back-projected onto a 3-d model, and every discretized location (voxel) of the model stores an array of intensities from the different modalities. This 3-d model is forward-projected to produce a fused EO and IR image from any viewpoint.
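The core idea of the abstract can be sketched in a few lines: each voxel holds an occupancy probability plus one stored intensity per modality, and forward projection composites those intensities along a ray weighted by expected visibility. The sketch below is a toy illustration under simplifying assumptions (orthographic projection along the z-axis, a hand-built grid, and a simple per-pixel blend); the function and variable names are hypothetical and not from the paper.

```python
import numpy as np

def render(occ, intensities, modality):
    """Forward-project one modality by compositing along z.

    occ:         (X, Y, Z) voxel occupancy probabilities in [0, 1]
    intensities: dict mapping modality name -> (X, Y, Z) intensity volume
    """
    vol = intensities[modality]
    X, Y, Z = occ.shape
    img = np.zeros((X, Y))
    vis = np.ones((X, Y))            # probability the ray reaches depth z unoccluded
    for z in range(Z):
        w = vis * occ[:, :, z]       # contribution weight = visibility * occupancy
        img += w * vol[:, :, z]
        vis *= 1.0 - occ[:, :, z]    # attenuate visibility behind occupied voxels
    return img

# Toy model: a single fully occupied slab at depth z = 2,
# with EO intensity 0.8 and IR intensity 0.3 stored at each voxel.
occ = np.zeros((4, 4, 5))
occ[:, :, 2] = 1.0
inten = {"EO": np.zeros_like(occ), "IR": np.zeros_like(occ)}
inten["EO"][:, :, 2] = 0.8
inten["IR"][:, :, 2] = 0.3

eo_img = render(occ, inten, "EO")      # every pixel -> 0.8
ir_img = render(occ, inten, "IR")      # every pixel -> 0.3
fused = 0.5 * eo_img + 0.5 * ir_img    # naive equal-weight EO/IR blend -> 0.55
```

Because both modalities are stored on the same 3-d grid, they are aligned by construction once the models are registered, and a fused view can be rendered from any camera pose; a real implementation would use perspective ray casting rather than the axis-aligned march shown here.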
Cite
Text
Jain et al. "A Multi-Sensor Fusion Framework in 3-D." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2013. doi:10.1109/CVPRW.2013.54
Markdown
[Jain et al. "A Multi-Sensor Fusion Framework in 3-D." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2013.](https://mlanthology.org/cvprw/2013/jain2013cvprw-multisensor/) doi:10.1109/CVPRW.2013.54
BibTeX
@inproceedings{jain2013cvprw-multisensor,
title = {{A Multi-Sensor Fusion Framework in 3-D}},
author = {Jain, Vishal and Miller, Andrew and Mundy, Joseph L.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2013},
pages = {314-319},
doi = {10.1109/CVPRW.2013.54},
url = {https://mlanthology.org/cvprw/2013/jain2013cvprw-multisensor/}
}