Deep Image Compression via End-to-End Learning
Abstract
We present a lossy image compression method based on deep convolutional neural networks (CNNs), which outperforms the existing BPG, WebP, JPEG2000, and JPEG codecs at the same bit rate, as measured by multi-scale structural similarity (MS-SSIM). Currently, most CNN-based approaches train the network using an L2 loss between the reconstructions and the ground truths in the pixel domain, which leads to over-smoothed results and degraded visual quality, especially at very low bit rates. We therefore improve the subjective quality by additionally combining a perceptual loss and an adversarial loss. To achieve better rate-distortion optimization (RDO), we also introduce easy-to-hard transfer learning when adding the quantization error and the rate constraint. Finally, we evaluate our method on the public Kodak dataset and the Test Dataset P/M released by the Computer Vision Lab of ETH Zurich, achieving average BD-rate reductions of 7.81% and 19.1% over BPG, respectively.
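The abstract describes training with a weighted combination of a pixel-domain L2 loss, a perceptual loss, an adversarial loss, and a rate constraint. A minimal sketch of such a combined objective is below; the weights, the gradient-based stand-in for the perceptual term, and the function names are hypothetical illustrations, not the authors' exact formulation.

```python
import numpy as np

def combined_rd_loss(recon, target, d_fake_prob, bits,
                     lam=0.01, w_mse=1.0, w_perc=0.1, w_adv=0.05):
    """Hypothetical weighted sum of the four terms named in the abstract.

    recon, target  -- 1-D float arrays (reconstruction and ground truth)
    d_fake_prob    -- discriminator's probability that recon is real
    bits           -- estimated code length of the latent representation
    lam, w_*       -- illustrative weights, not from the paper
    """
    # Pixel-domain L2 (MSE) distortion.
    mse = np.mean((recon - target) ** 2)
    # Stand-in perceptual term: MSE between local gradients
    # (a real system would compare deep-feature activations).
    perc = np.mean((np.diff(recon) - np.diff(target)) ** 2)
    # Adversarial term: encourage the discriminator to judge recon as real.
    adv = -np.log(d_fake_prob + 1e-9)
    # Rate constraint weighted by the Lagrange multiplier lam.
    rate = lam * bits
    return w_mse * mse + w_perc * perc + w_adv * adv + rate
```

A perfect reconstruction that fully fools the discriminator at zero rate drives every term to (near) zero, and the loss grows with the bit budget, matching the rate-distortion trade-off the abstract optimizes.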
Cite
Text
Liu et al. "Deep Image Compression via End-to-End Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.
Markdown
[Liu et al. "Deep Image Compression via End-to-End Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/liu2018cvprw-deep/)
BibTeX
@inproceedings{liu2018cvprw-deep,
title = {{Deep Image Compression via End-to-End Learning}},
author = {Liu, Haojie and Chen, Tong and Shen, Qiu and Yue, Tao and Ma, Zhan},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2018},
pages = {2575-2578},
url = {https://mlanthology.org/cvprw/2018/liu2018cvprw-deep/}
}