Building Dual-Domain Representations for Compression Artifacts Reduction
Abstract
We propose a highly accurate approach to removing artifacts from JPEG-compressed images. Our approach jointly learns a very deep convolutional network in both the DCT and pixel domains. The dual-domain representation makes full use of DCT-domain prior knowledge of JPEG compression, which traditional network-based approaches usually lack, while also benefiting from the power and efficiency of a deep feed-forward architecture, in contrast to capacity-limited sparse-coding-based approaches. Two simple strategies, Adam and residual learning, are adopted to train the very deep network and prove effective. Extensive experiments demonstrate large improvements of our approach over the state of the art.
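The DCT-domain prior the abstract refers to comes from JPEG's 8×8 block DCT: quantization artifacts live on a known grid of transform coefficients. As a minimal illustration (not the paper's network, just the transform its DCT-domain branch operates on), the following sketch computes the orthonormal 8×8 block DCT of a grayscale image with numpy; the function names are our own for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, as used by JPEG's 8x8 block transform.
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] = np.sqrt(1.0 / n)  # DC row has a smaller normalization factor
    return M

def block_dct(image, n=8):
    # Blockwise 2D DCT of a grayscale image whose sides are multiples of n.
    h, w = image.shape
    M = dct_matrix(n)
    # Split into an (h/n, w/n) grid of n x n blocks.
    blocks = image.reshape(h // n, n, w // n, n).transpose(0, 2, 1, 3)
    # 2D DCT of each block: M @ B @ M^T, broadcast over the block grid.
    coeffs = M @ blocks @ M.T
    # Reassemble the coefficient blocks into an h x w array.
    return coeffs.transpose(0, 2, 1, 3).reshape(h, w)
```

Because `M` is orthonormal, the inverse transform is simply `M.T @ C @ M` per block, which is what lets a DCT-domain branch and a pixel-domain branch describe the same image losslessly.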
Cite
Text
Guo and Chao. "Building Dual-Domain Representations for Compression Artifacts Reduction." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46448-0_38

Markdown

[Guo and Chao. "Building Dual-Domain Representations for Compression Artifacts Reduction." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/guo2016eccv-building/) doi:10.1007/978-3-319-46448-0_38

BibTeX
@inproceedings{guo2016eccv-building,
title = {{Building Dual-Domain Representations for Compression Artifacts Reduction}},
author = {Guo, Jun and Chao, Hongyang},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {628--644},
doi = {10.1007/978-3-319-46448-0_38},
url = {https://mlanthology.org/eccv/2016/guo2016eccv-building/}
}