Weakly Supervised Fusion of Multiple Overhead Images

Abstract

This work addresses the problem of combining noisy overhead images to produce a single high-quality image of a region. Existing fusion methods rely either on supervised learning, which requires image quality annotations, or on ad hoc criteria, which do not generalize well. We formulate a weakly supervised method that learns to predict image quality at the pixel level by optimizing for semantic segmentation. This means our method only requires semantic segmentation labels, not explicit artifact annotations in the input images. We evaluate our method under varying levels of occlusion and cloud cover. Experimental results show that our method is significantly better than a baseline fusion approach and nearly as good as the ideal case, a single noise-free image.
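The fusion step implied by the abstract can be sketched as a per-pixel weighted average, where the weights come from a softmax over per-pixel quality scores. In the paper those scores are predicted by a network trained only with a semantic segmentation loss; in this minimal, illustrative sketch (the function name and data layout are assumptions, not the authors' code) the scores are simply given as inputs:

```python
import math

def fuse_images(images, quality_scores):
    """
    Fuse N overhead images of the same region into one image by
    per-pixel weighted averaging. The weights are a softmax over
    per-pixel quality scores, so higher-quality pixels dominate.

    images:         list of N images, each H x W (grayscale, nested lists)
    quality_scores: list of N score maps, each H x W (higher = cleaner pixel)

    NOTE: in the paper the scores come from a learned model supervised
    only by segmentation labels; here they are stand-in inputs.
    """
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            scores = [quality_scores[k][i][j] for k in range(n)]
            m = max(scores)  # subtract the max for numerical stability
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            fused[i][j] = sum(exps[k] / z * images[k][i][j] for k in range(n))
    return fused

# Toy example: two 1x2 images; image B's second pixel is "cloudy"
# (low quality score), so the fused value there leans toward image A.
a = [[0.9, 0.9]]
b = [[0.1, 0.1]]
qa = [[0.0, 5.0]]   # high score on A's second pixel
qb = [[0.0, -5.0]]  # low score on B's second pixel
fused = fuse_images([a, b], [qa, qb])
```

With equal scores (first pixel) the fusion reduces to a plain average; with a large score gap (second pixel) it approaches a hard selection of the higher-quality source.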

Cite

Text

Rafique et al. "Weakly Supervised Fusion of Multiple Overhead Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00189

Markdown

[Rafique et al. "Weakly Supervised Fusion of Multiple Overhead Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/rafique2019cvprw-weakly/) doi:10.1109/CVPRW.2019.00189

BibTeX

@inproceedings{rafique2019cvprw-weakly,
  title     = {{Weakly Supervised Fusion of Multiple Overhead Images}},
  author    = {Rafique, Muhammad Usman and Blanton, Hunter and Jacobs, Nathan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {1479--1486},
  doi       = {10.1109/CVPRW.2019.00189},
  url       = {https://mlanthology.org/cvprw/2019/rafique2019cvprw-weakly/}
}