Fusion of Infrared and Visible-Light Videos Using Motion-Compensated Temporal Sub-Band Decompositions
Abstract
The fusion of visible-light and infrared videos has applications in several areas and is an active research topic. It is common to employ complex fusion methods that take into account spatial and/or temporal information in the videos. In this paper we propose a video fusion method based on a motion-compensated, two-band temporal sub-band decomposition. The alignment provided by the motion vectors, besides reducing registration errors between the input images, allows a simple fusion rule to be applied to the temporal sub-bands. The results indicate that exploiting the temporal information alone in this way is quite effective, yielding objective fusion-quality scores that compare favorably with more sophisticated methods based on complete spatiotemporal information.
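The pipeline described above can be gestured at with a minimal sketch. The code below is illustrative only, not the authors' implementation: it uses a Haar (two-band) temporal decomposition of a pair of adjacent frames and an assumed simple fusion rule (average the low bands, keep the larger-magnitude high-band coefficient); motion compensation is omitted for brevity, whereas in the paper the frames are first aligned by motion vectors.

```python
# Illustrative sketch (not the authors' implementation): a two-band (Haar)
# temporal sub-band decomposition of two consecutive frames, followed by a
# simple per-band fusion rule. Motion compensation is omitted; in the paper,
# frames are aligned by motion vectors before the decomposition.

def temporal_haar(frame_a, frame_b):
    """Split two temporally adjacent frames into low/high temporal bands."""
    low = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]   # temporal average
    high = [(a - b) / 2 for a, b in zip(frame_a, frame_b)]  # temporal detail
    return low, high

def fuse_pair(vis_pair, ir_pair):
    """Fuse visible and infrared frame pairs in the temporal sub-band domain."""
    vis_low, vis_high = temporal_haar(*vis_pair)
    ir_low, ir_high = temporal_haar(*ir_pair)
    # Assumed simple rules: average the low bands, keep the stronger detail.
    fused_low = [(v + i) / 2 for v, i in zip(vis_low, ir_low)]
    fused_high = [v if abs(v) >= abs(i) else i
                  for v, i in zip(vis_high, ir_high)]
    # Inverse Haar step reconstructs the two fused frames.
    frame1 = [l + h for l, h in zip(fused_low, fused_high)]
    frame2 = [l - h for l, h in zip(fused_low, fused_high)]
    return frame1, frame2

# Toy 1-D "frames" (single rows of pixels) standing in for full images.
f1, f2 = fuse_pair(([10, 20, 30], [12, 18, 30]),
                   ([5, 40, 25], [5, 44, 25]))
print(f1, f2)  # → [7.0, 28.5, 27.5] [9.0, 32.5, 27.5]
```

Because the decomposition is invertible, fusion performed on the sub-bands maps back to the frame domain exactly; the same structure extends to motion-compensated lifting, where the average and difference are taken along motion trajectories rather than at fixed pixel positions.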
Cite
Text
Gois et al. "Fusion of Infrared and Visible-Light Videos Using Motion-Compensated Temporal Sub-Band Decompositions." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00016
Markdown
[Gois et al. "Fusion of Infrared and Visible-Light Videos Using Motion-Compensated Temporal Sub-Band Decompositions." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/gois2018wacv-fusion/) doi:10.1109/WACV.2018.00016
BibTeX
@inproceedings{gois2018wacv-fusion,
title = {{Fusion of Infrared and Visible-Light Videos Using Motion-Compensated Temporal Sub-Band Decompositions}},
author = {Gois, Jonathan N. and da Silva, Eduardo A. B. and Pagliari, Carla L. and Perez, Marcelo M.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2018},
pages = {93-101},
doi = {10.1109/WACV.2018.00016},
url = {https://mlanthology.org/wacv/2018/gois2018wacv-fusion/}
}