DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion
Abstract
Infrared and visible image fusion, a hot topic in the field of image processing, aims to obtain fused images that retain the advantages of both source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps carrying low- and high-frequency information, respectively, and the decoder recovers the original image. To this end, the loss function drives the background feature maps of the two source images to be similar and their detail feature maps to be dissimilar. In the test phase, the background and detail feature maps are merged separately via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method generates fused images containing highlighted targets and abundant detail texture with strong reproducibility, and that it surpasses state-of-the-art (SOTA) approaches.
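The abstract's training objective (background maps similar, detail maps dissimilar) and test-phase fusion module can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the tanh bounding, the trade-off weight `alpha`, and element-wise averaging as the fusion rule are assumptions for illustration; the paper's exact loss terms and fusion strategy may differ.

```python
import numpy as np

def decomposition_loss(b_ir, b_vis, d_ir, d_vis, alpha=1.0):
    """Sketch of a decomposition loss in the spirit of the abstract:
    penalize distance between background feature maps, reward distance
    between detail feature maps. tanh bounds each term so the negative
    term cannot diverge; alpha is a hypothetical trade-off weight."""
    bg_term = np.tanh(np.mean((b_ir - b_vis) ** 2))
    detail_term = np.tanh(np.mean((d_ir - d_vis) ** 2))
    return bg_term - alpha * detail_term

def fuse_feature_maps(f_ir, f_vis):
    """Hypothetical fusion module: element-wise averaging of the two
    sources' feature maps (a common simple strategy), applied separately
    to background and detail maps before decoding."""
    return 0.5 * (f_ir + f_vis)
```

With identical background maps and differing detail maps, the loss is negative, i.e. the decomposition the encoder is trained toward is rewarded.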
Cite
Text
Zhao et al. "DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/135
Markdown
[Zhao et al. "DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/zhao2020ijcai-didfuse/) doi:10.24963/IJCAI.2020/135
BibTeX
@inproceedings{zhao2020ijcai-didfuse,
title = {{DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion}},
author = {Zhao, Zixiang and Xu, Shuang and Zhang, Chunxia and Liu, Junmin and Zhang, Jiangshe and Li, Pengfei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {970--976},
doi = {10.24963/IJCAI.2020/135},
url = {https://mlanthology.org/ijcai/2020/zhao2020ijcai-didfuse/}
}