Image Super-Resolution with Deep Dictionary

Abstract

Since the first success of Dong et al., the deep-learning-based approach has become dominant in the field of single-image super-resolution. It replaces all the handcrafted image-processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high- and low-resolution dictionaries, the dictionaries in deep-learning-based methods are implicitly acquired as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance degrades on images created differently from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), in which a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of the high-resolution dictionary makes the network more robust to out-of-domain test images while maintaining performance on in-domain test images.
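To illustrate the sparse-coding baseline the abstract contrasts with, the sketch below reconstructs a high-resolution patch from a low-resolution one using paired explicit dictionaries. All sizes, the random dictionaries, and the least-squares coding step are illustrative assumptions, not the paper's method; SRDD instead learns the high-resolution dictionary end-to-end inside a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

n_atoms = 32            # number of dictionary atoms (assumed)
lr_dim, hr_dim = 9, 36  # 3x3 LR patch, 6x6 HR patch (2x upscaling)

# Paired dictionaries: column k of D_lr and D_hr represent the same atom
# at low and high resolution (random here for illustration only).
D_lr = rng.standard_normal((lr_dim, n_atoms))
D_hr = rng.standard_normal((hr_dim, n_atoms))

def super_resolve_patch(lr_patch, D_lr, D_hr):
    """Code the LR patch over D_lr, then reuse the coefficients with D_hr."""
    alpha, *_ = np.linalg.lstsq(D_lr, lr_patch, rcond=None)  # coefficients
    return D_hr @ alpha  # HR reconstruction from the shared code

lr_patch = rng.standard_normal(lr_dim)
hr_patch = super_resolve_patch(lr_patch, D_lr, D_hr)
print(hr_patch.shape)  # (36,)
```

The key point is that the coefficients computed on the low-resolution side are transferred unchanged to the high-resolution dictionary; deep-learning-based methods fold both the coding and the dictionaries into learned convolutions, while SRDD keeps the high-resolution dictionary explicit.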

Cite

Text

Maeda. "Image Super-Resolution with Deep Dictionary." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19800-7_27

Markdown

[Maeda. "Image Super-Resolution with Deep Dictionary." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/maeda2022eccv-image/) doi:10.1007/978-3-031-19800-7_27

BibTeX

@inproceedings{maeda2022eccv-image,
  title     = {{Image Super-Resolution with Deep Dictionary}},
  author    = {Maeda, Shunta},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19800-7_27},
  url       = {https://mlanthology.org/eccv/2022/maeda2022eccv-image/}
}