Modeling Deformable Gradient Compositions for Single-Image Super-Resolution

Abstract

We propose a single-image super-resolution method based on gradient reconstruction. To predict the gradient field of the high-resolution image, we collect a dictionary of gradient patterns from an external set of images. We observe that some patches represent singular primitive structures (e.g., a single edge), while others are non-singular (e.g., a triplet of edges). Because singular primitive patches are more invariant to scale change (i.e., they have less ambiguity across different scales), we represent the non-singular primitives as compositions of singular ones, each of which is allowed some deformation. Both the input patches and the dictionary elements are decomposed so that they contain only singular primitives. The compositional aspect of the model makes the predicted gradient field more reliable, and the deformable aspect makes the dictionary more expressive. As shown in our experimental results, the proposed method outperforms state-of-the-art methods.
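The sketch below is not the authors' code; it is a minimal illustration, under assumed details, of the general gradient-based super-resolution pipeline the abstract describes: upsample the input, predict a high-resolution gradient field, and recover the image by balancing fidelity to the upsampled input against agreement with the predicted gradients. The dictionary-based deformable-composition prediction itself is only stubbed out (the `predict_hr_gradients` placeholder merely sharpens bicubic gradients), and the weight `lam`, step size, and iteration count are illustrative choices, not values from the paper.

```python
# Illustrative sketch of gradient-domain super-resolution (not the paper's method).
import numpy as np
from scipy.ndimage import zoom

def forward_diff(img):
    """Forward-difference gradients (gx, gy) with zero padding at the far border."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def divergence(gx, gy):
    """Backward-difference divergence, the (negative) adjoint of forward_diff."""
    div = np.zeros_like(gx)
    div[:, 0] += gx[:, 0]
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[0, :] += gy[0, :]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    return div

def predict_hr_gradients(upsampled):
    # Placeholder for the paper's dictionary-based prediction: in the actual method,
    # patches are decomposed into singular primitives and matched against deformable
    # dictionary compositions. Here the bicubic gradients are simply sharpened so the
    # reconstruction step below has something to work with.
    gx, gy = forward_diff(upsampled)
    return 1.3 * gx, 1.3 * gy

def reconstruct(upsampled, gx_hr, gy_hr, lam=1.0, step=0.05, iters=300):
    """Minimize ||I - U||^2 + lam * ||grad(I) - G||^2 by gradient descent."""
    img = upsampled.copy()
    for _ in range(iters):
        rx, ry = forward_diff(img)
        grad = 2.0 * (img - upsampled) - 2.0 * lam * divergence(rx - gx_hr, ry - gy_hr)
        img -= step * grad
    return np.clip(img, 0.0, 1.0)

if __name__ == "__main__":
    lr = np.random.rand(32, 32)          # stand-in for a low-resolution input
    up = zoom(lr, 2, order=3)            # bicubic-style upsampling as the baseline
    gx, gy = predict_hr_gradients(up)
    sr = reconstruct(up, gx, gy)
    print(sr.shape)                      # (64, 64)
```

In this formulation, the quality of the result hinges entirely on how well the high-resolution gradient field is predicted, which is the part the paper's deformable gradient compositions address.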

Cite

Text

Zhu et al. "Modeling Deformable Gradient Compositions for Single-Image Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7299180

Markdown

[Zhu et al. "Modeling Deformable Gradient Compositions for Single-Image Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/zhu2015cvpr-modeling/) doi:10.1109/CVPR.2015.7299180

BibTeX

@inproceedings{zhu2015cvpr-modeling,
  title     = {{Modeling Deformable Gradient Compositions for Single-Image Super-Resolution}},
  author    = {Zhu, Yu and Zhang, Yanning and Bonev, Boyan and Yuille, Alan L.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  doi       = {10.1109/CVPR.2015.7299180},
  url       = {https://mlanthology.org/cvpr/2015/zhu2015cvpr-modeling/}
}