Guided Deep Decoder: Unsupervised Image Pair Fusion
Abstract
The fusion of input and guidance images whose information is subject to a trade-off (e.g., hyperspectral and RGB image fusion, or pansharpening) can be interpreted as one general problem. However, previous studies applied task-specific handcrafted priors and did not address these problems with a unified approach. To address this limitation, we propose a guided deep decoder network as a general prior. The proposed network is composed of an encoder-decoder network that exploits multi-scale features of a guidance image and a deep decoder network that generates an output image. The two networks are connected by feature refinement units that embed the multi-scale features of the guidance image into the deep decoder network. The network parameters can be optimized in an unsupervised way, without training data. Our results show that the proposed network achieves state-of-the-art performance on various image fusion problems.
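The abstract fixes only the high-level design, not the implementation. Below is a minimal PyTorch sketch of that design under stated assumptions: a U-Net-style encoder-decoder extracts multi-scale guidance features, a deep decoder upsamples a fixed random code, and each decoder stage is modulated by the matching guidance features. The names FeatureRefinementUnit, conv_block, and the 1x1-convolution fusion inside the refinement unit are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of a guided deep decoder; module internals are
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class FeatureRefinementUnit(nn.Module):
    """Embed guidance features into the deep decoder
    (illustrative: a 1x1 conv over the concatenated features)."""
    def __init__(self, c_dec, c_gui):
        super().__init__()
        self.fuse = nn.Conv2d(c_dec + c_gui, c_dec, 1)

    def forward(self, dec_feat, gui_feat):
        return F.relu(self.fuse(torch.cat([dec_feat, gui_feat], dim=1)))

class GuidedDeepDecoder(nn.Module):
    def __init__(self, c_guide, c_out, c=64, n_scales=4):
        super().__init__()
        # Encoder-decoder over the guidance image (multi-scale features)
        self.enc = nn.ModuleList(
            [conv_block(c_guide if i == 0 else c, c) for i in range(n_scales)])
        self.dec_g = nn.ModuleList([conv_block(c, c) for _ in range(n_scales)])
        # Deep decoder: 1x1 convs plus upsampling, starting from a random code
        self.dec = nn.ModuleList([nn.Conv2d(c, c, 1) for _ in range(n_scales)])
        self.fru = nn.ModuleList(
            [FeatureRefinementUnit(c, c) for _ in range(n_scales)])
        self.head = nn.Conv2d(c, c_out, 1)

    def forward(self, guide, code):
        # Encoder path: collect guidance features at each scale
        feats, x = [], guide
        for enc in self.enc:
            x = enc(x)
            feats.append(x)
            x = F.avg_pool2d(x, 2)
        # Decoder path of the guidance network, coarse to fine
        gui = []
        for dec_g, skip in zip(self.dec_g, reversed(feats)):
            x = F.interpolate(x, size=skip.shape[-2:], mode='bilinear',
                              align_corners=False)
            x = dec_g(x + skip)
            gui.append(x)
        # Deep decoder: upsample the code, refine with guidance at every scale
        z = code
        for dec, fru, g in zip(self.dec, self.fru, gui):
            z = F.interpolate(z, size=g.shape[-2:], mode='bilinear',
                              align_corners=False)
            z = fru(F.relu(dec(z)), g)
        return torch.sigmoid(self.head(z))
```

Consistent with the unsupervised setting described above, such a network would be fitted per image pair in the style of a deep image prior: fix the guidance image and a random code, then minimize a data-fit loss between a degraded version of the network output and the observed low-resolution input, with no external training data.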
Cite
Text
Uezato et al. "Guided Deep Decoder: Unsupervised Image Pair Fusion." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58539-6_6
Markdown
[Uezato et al. "Guided Deep Decoder: Unsupervised Image Pair Fusion." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/uezato2020eccv-guided/) doi:10.1007/978-3-030-58539-6_6
BibTeX
@inproceedings{uezato2020eccv-guided,
title = {{Guided Deep Decoder: Unsupervised Image Pair Fusion}},
author = {Uezato, Tatsumi and Hong, Danfeng and Yokoya, Naoto and He, Wei},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58539-6_6},
url = {https://mlanthology.org/eccv/2020/uezato2020eccv-guided/}
}