The Unreasonable Effectiveness of Texture Transfer for Single Image Super-Resolution

Abstract

While implicit generative models such as GANs have shown impressive results in high-quality image reconstruction and manipulation using a combination of various losses, we consider a simpler approach leading to surprisingly strong results. We show that texture loss [1] alone allows the generation of perceptually high-quality images. We provide a better understanding of the texture-constraining mechanism and develop a novel semantically guided texture-constraining method for further improvement. Using a recently developed perceptual metric employing “deep features”, termed LPIPS [2], the method obtains state-of-the-art results. Moreover, we show that a texture representation of those deep features better captures the perceptual quality of an image than the original deep features. Using texture information, off-the-shelf deep classification networks (without training) perform as well as the best-performing (tuned and calibrated) LPIPS metrics.
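The texture loss referred to above is the Gatys-style texture loss: instead of matching deep features directly, it matches their Gram matrices (channel-wise feature correlations) between the super-resolved output and the reference image. The following is a minimal, illustrative PyTorch sketch, assuming a frozen VGG-19 feature extractor and the layers relu1_1, relu2_1, and relu3_1 as texture layers; the exact layer set, any patch-wise computation, and the loss weighting used in the paper may differ.

import torch.nn.functional as F
import torchvision.models as models

# Frozen, ImageNet-pretrained VGG-19 feature extractor
# (on older torchvision, use vgg19(pretrained=True) instead).
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Assumed texture layers: indices of relu1_1, relu2_1, relu3_1 in vgg.features.
TEXTURE_LAYERS = {1, 6, 11}

def gram_matrix(feat):
    """Channel-by-channel correlation (Gram) matrix of a feature map,
    normalized by the number of feature entries."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(sr, hr):
    """Sum of squared Gram-matrix differences between the super-resolved
    image `sr` and the high-resolution reference `hr` over the selected
    VGG layers. Both inputs are assumed to be ImageNet-normalized."""
    loss = 0.0
    x, y = sr, hr
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in TEXTURE_LAYERS:
            loss = loss + F.mse_loss(gram_matrix(x), gram_matrix(y))
        if i >= max(TEXTURE_LAYERS):
            break
    return loss

The same Gram-matrix representation underlies the perceptual-metric observation in the abstract: comparing Gram matrices of off-the-shelf classification features, rather than the raw features, already tracks perceptual image quality on par with tuned LPIPS metrics.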

Cite

Text

Gondal et al. "The Unreasonable Effectiveness of Texture Transfer for Single Image Super-Resolution." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11021-5_6

Markdown

[Gondal et al. "The Unreasonable Effectiveness of Texture Transfer for Single Image Super-Resolution." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/gondal2018eccvw-unreasonable/) doi:10.1007/978-3-030-11021-5_6

BibTeX

@inproceedings{gondal2018eccvw-unreasonable,
  title     = {{The Unreasonable Effectiveness of Texture Transfer for Single Image Super-Resolution}},
  author    = {Gondal, Muhammad Waleed and Schölkopf, Bernhard and Hirsch, Michael},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2018},
  pages     = {80--97},
  doi       = {10.1007/978-3-030-11021-5_6},
  url       = {https://mlanthology.org/eccvw/2018/gondal2018eccvw-unreasonable/}
}