One-to-Many Network for Visually Pleasing Compression Artifacts Reduction
Abstract
We consider the compression artifacts reduction problem, where a compressed image is transformed into an artifact-free image. Recent approaches to this problem typically train a one-to-one mapping using a per-pixel L_2 loss between the outputs and the ground-truths. We point out that these approaches tend to produce overly smooth results, and that PSNR does not reflect their real performance. In this paper, we propose a one-to-many network, which measures output quality using a perceptual loss, a naturalness loss, and a JPEG loss. We also avoid grid-like artifacts during deconvolution using a "shift-and-average" strategy. Extensive experimental results demonstrate the dramatic visual improvement of our approach over the state of the art.
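The "shift-and-average" idea mentioned in the abstract can be illustrated with a toy example. The sketch below is an assumption about the mechanism, not the paper's implementation: it averages the s×s spatially shifted copies of a deconvolution output, which cancels any stride-periodic (checkerboard) pattern the deconvolution may introduce.

```python
import numpy as np

def shift_and_average(x, stride):
    """Average all stride x stride circularly shifted copies of x.

    A hypothetical sketch of a shift-and-average step: a pattern that
    repeats with period `stride` along both axes is flattened to its
    mean, suppressing grid-like deconvolution artifacts.
    """
    acc = np.zeros_like(x, dtype=float)
    for di in range(stride):
        for dj in range(stride):
            acc += np.roll(x, shift=(di, dj), axis=(0, 1))
    return acc / (stride * stride)

# Toy "deconvolution output": a period-2 checkerboard artifact
# (values alternate between 0 and 2, so the true signal level is 1).
y = np.indices((4, 4)).sum(axis=0) % 2 * 2.0
smoothed = shift_and_average(y, stride=2)  # constant array of 1.0
```

In a real network the averaging would be applied to feature maps after each deconvolution layer; here circular shifts (`np.roll`) are used purely to keep the toy example exact at the borders.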
Cite
Text
Guo and Chao. "One-to-Many Network for Visually Pleasing Compression Artifacts Reduction." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.517
Markdown
[Guo and Chao. "One-to-Many Network for Visually Pleasing Compression Artifacts Reduction." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/guo2017cvpr-onetomany/) doi:10.1109/CVPR.2017.517
BibTeX
@inproceedings{guo2017cvpr-onetomany,
title = {{One-to-Many Network for Visually Pleasing Compression Artifacts Reduction}},
author = {Guo, Jun and Chao, Hongyang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.517},
url = {https://mlanthology.org/cvpr/2017/guo2017cvpr-onetomany/}
}