Dual-Camera Super-Resolution with Aligned Attention Modules

Abstract

We present a novel approach to reference-based super-resolution (RefSR) with a focus on dual-camera super-resolution (DCSR), which utilizes reference images to produce high-quality and high-fidelity results. Our proposed method generalizes standard patch-based feature matching with spatial alignment operations. We further explore dual-camera super-resolution, a promising application of RefSR, and build a dataset of 146 image pairs captured by the main and telephoto cameras of a smartphone. To bridge the domain gaps between real-world images and the training images, we propose a self-supervised domain adaptation strategy for real-world images. Extensive experiments on our dataset and a public benchmark demonstrate a clear improvement of our method over the state of the art in both quantitative evaluation and visual comparisons.
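
To make the baseline concrete, the sketch below illustrates the standard patch-based feature matching that the paper's aligned attention generalizes: low-resolution (LR) features query telephoto reference (Ref) features, and the best-matching Ref patches are transferred back onto the LR grid with a confidence map. This is a minimal, hypothetical PyTorch rendering of that baseline, not the authors' released code; all function and tensor names here are illustrative, and the paper's spatial alignment operations are deliberately omitted.

```python
# Minimal sketch of patch-based reference feature matching (the baseline
# that the paper's aligned attention generalizes). Names are illustrative.
import torch
import torch.nn.functional as F

def patch_match_and_transfer(lr_feat, ref_feat, patch_size=3):
    """Transfer reference features to LR positions via patch-wise matching.

    lr_feat:  (B, C, H, W) features of the upsampled LR (main-camera) input
    ref_feat: (B, C, H, W) features of the telephoto reference, same size
    Returns:  transferred Ref features (B, C, H, W) and a confidence
              map (B, 1, H, W) derived from the best match scores.
    """
    B, C, H, W = lr_feat.shape
    k = patch_size

    # Unfold both feature maps into overlapping k x k patches: (B, C*k*k, H*W)
    lr_patches = F.unfold(lr_feat, kernel_size=k, padding=k // 2)
    ref_patches = F.unfold(ref_feat, kernel_size=k, padding=k // 2)

    # Cosine similarity between every LR patch and every Ref patch
    lr_norm = F.normalize(lr_patches, dim=1)
    ref_norm = F.normalize(ref_patches, dim=1)
    sim = torch.bmm(lr_norm.transpose(1, 2), ref_norm)   # (B, H*W, H*W)

    # Hard attention: index of the best-matching Ref patch per LR position
    conf, idx = sim.max(dim=2)                            # (B, H*W)

    # Gather the matched Ref patches and fold them back onto the LR grid
    idx_exp = idx.unsqueeze(1).expand(-1, ref_patches.size(1), -1)
    matched = torch.gather(ref_patches, 2, idx_exp)       # (B, C*k*k, H*W)
    transferred = F.fold(matched, output_size=(H, W),
                         kernel_size=k, padding=k // 2)

    # Average overlapping contributions introduced by fold
    ones = torch.ones_like(lr_feat)
    overlap = F.fold(F.unfold(ones, kernel_size=k, padding=k // 2),
                     output_size=(H, W), kernel_size=k, padding=k // 2)
    transferred = transferred / overlap

    return transferred, conf.view(B, 1, H, W)


if __name__ == "__main__":
    lr = torch.randn(1, 16, 32, 32)
    ref = torch.randn(1, 16, 32, 32)
    out, conf = patch_match_and_transfer(lr, ref)
    print(out.shape, conf.shape)  # (1, 16, 32, 32) and (1, 1, 32, 32)
```

The confidence map is the kind of signal a fusion module can use to weight transferred reference features against the LR features; the paper's contribution, per the abstract, is to go beyond this pure patch matching by incorporating spatial alignment, which the sketch above does not attempt to reproduce.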

Cite

Text

Wang et al. "Dual-Camera Super-Resolution with Aligned Attention Modules." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00201

Markdown

[Wang et al. "Dual-Camera Super-Resolution with Aligned Attention Modules." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/wang2021iccv-dualcamera/) doi:10.1109/ICCV48922.2021.00201

BibTeX

@inproceedings{wang2021iccv-dualcamera,
  title     = {{Dual-Camera Super-Resolution with Aligned Attention Modules}},
  author    = {Wang, Tengfei and Xie, Jiaxin and Sun, Wenxiu and Yan, Qiong and Chen, Qifeng},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {2001--2010},
  doi       = {10.1109/ICCV48922.2021.00201},
  url       = {https://mlanthology.org/iccv/2021/wang2021iccv-dualcamera/}
}