Learning Parametric Distributions for Image Super-Resolution: Where Patch Matching Meets Sparse Coding

Abstract

Existing approaches to image super-resolution (SR) are often either data-driven (e.g., based on internet-scale matching and web image retrieval) or model-based (e.g., formulated as a maximum a posteriori (MAP) estimation problem). The former is conceptually simple yet heuristic, while the latter is constrained by the fundamental limit of frequency aliasing. In this paper, we propose a hybrid approach to SR that combines these two lines of ideas. More specifically, the parameters of the sparse distributions underlying desirable high-resolution (HR) image patches are learned from a pair consisting of the low-resolution (LR) image and the retrieved HR images. Our hybrid approach can be interpreted as the first attempt to reconcile the difference between parametric and nonparametric models for low-level vision tasks. Experimental results show that the proposed hybrid SR method performs much better than existing state-of-the-art methods in terms of both subjective and objective image quality.
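To make the hybrid idea concrete, the following is a minimal sketch (not the authors' code) of how patch retrieval and MAP sparse coding could be combined: per-atom Laplacian scales are estimated from retrieved HR exemplar patches and then used as a learned prior when sparse-coding the LR patch. The dictionary names (D_lr, D_hr), the least-squares code estimate, and the ISTA solver are illustrative assumptions, not details taken from the paper.

# Sketch only: learned Laplacian prior from retrieved HR patches + MAP sparse coding.
import numpy as np

def estimate_laplacian_scales(hr_exemplars, D_hr, eps=1e-3):
    # Crude stand-in for sparse coding of the retrieved HR exemplars:
    # least-squares codes, then the maximum-likelihood Laplacian scale per atom.
    codes, *_ = np.linalg.lstsq(D_hr, hr_exemplars, rcond=None)
    return np.mean(np.abs(codes), axis=1) + eps

def map_sparse_code(y_lr, D_lr, scales, sigma=2.0, n_iter=200):
    # ISTA for min ||y - D_lr a||^2 / (2 sigma^2) + sum_i |a_i| / b_i,
    # i.e., MAP estimation under a Gaussian likelihood and Laplacian prior.
    lam = sigma**2 / scales                      # per-atom soft thresholds
    step = 1.0 / np.linalg.norm(D_lr, 2)**2      # 1 / Lipschitz constant
    a = np.zeros(D_lr.shape[1])
    for _ in range(n_iter):
        grad = D_lr.T @ (D_lr @ a - y_lr)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return a

def reconstruct_hr_patch(y_lr, hr_exemplars, D_lr, D_hr):
    # HR patch = HR dictionary applied to the MAP sparse code of the LR patch.
    scales = estimate_laplacian_scales(hr_exemplars, D_hr)
    a = map_sparse_code(y_lr, D_lr, scales)
    return D_hr @ a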

Cite

Text

Li et al. "Learning Parametric Distributions for Image Super-Resolution: Where Patch Matching Meets Sparse Coding." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.59

Markdown

[Li et al. "Learning Parametric Distributions for Image Super-Resolution: Where Patch Matching Meets Sparse Coding." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/li2015iccv-learning/) doi:10.1109/ICCV.2015.59

BibTeX

@inproceedings{li2015iccv-learning,
  title     = {{Learning Parametric Distributions for Image Super-Resolution: Where Patch Matching Meets Sparse Coding}},
  author    = {Li, Yongbo and Dong, Weisheng and Shi, Guangming and Xie, Xuemei},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.59},
  url       = {https://mlanthology.org/iccv/2015/li2015iccv-learning/}
}