PanNet: A Deep Network Architecture for Pan-Sharpening

Abstract

We propose a deep network architecture for the pan-sharpening problem called PanNet. We incorporate domain-specific knowledge into the design of PanNet by focusing on the two aims of pan-sharpening: spectral and spatial preservation. For spectral preservation, we add the up-sampled multispectral images to the network output, which directly propagates the spectral information to the reconstructed image. To preserve spatial structure, we train the network parameters in the high-pass filtering domain rather than the image domain. We show that the trained network generalizes well to images from different satellites without retraining. Experiments show significant improvement over state-of-the-art methods, both visually and in terms of standard quality metrics.
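The two design choices described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a simple box blur as the low-pass filter (the paper uses a standard high-pass filtering step) and takes the CNN itself as an abstract callable `net`, which here is a hypothetical placeholder.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple low-pass filter: average over a k x k window, per channel.
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1], :]
    return out / (k * k)

def high_pass(img, k=5):
    # High-pass component = image minus its low-pass version.
    return img - box_blur(img, k)

def pannet_forward(pan, ms_up, net):
    # pan:   (H, W, 1) panchromatic image
    # ms_up: (H, W, C) multispectral image up-sampled to the PAN resolution
    # net:   the learned network (placeholder here), mapping C+1 -> C channels
    #
    # Spatial preservation: the network sees only high-pass details.
    hp = np.concatenate([high_pass(pan), high_pass(ms_up)], axis=-1)
    # Spectral preservation: the up-sampled MS image is added to the output,
    # so spectral content bypasses the network entirely.
    return ms_up + net(hp)
```

The skip connection means the network only has to learn the high-frequency residual; low-frequency spectral content reaches the output unchanged.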

Cite

Text

Yang et al. "PanNet: A Deep Network Architecture for Pan-Sharpening." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.193

Markdown

[Yang et al. "PanNet: A Deep Network Architecture for Pan-Sharpening." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/yang2017iccv-pannet/) doi:10.1109/ICCV.2017.193

BibTeX

@inproceedings{yang2017iccv-pannet,
  title     = {{PanNet: A Deep Network Architecture for Pan-Sharpening}},
  author    = {Yang, Junfeng and Fu, Xueyang and Hu, Yuwen and Huang, Yue and Ding, Xinghao and Paisley, John},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.193},
  url       = {https://mlanthology.org/iccv/2017/yang2017iccv-pannet/}
}