PO-ELIC: Perception-Oriented Efficient Learned Image Coding

Abstract

In recent years, learned image compression (LIC) has achieved remarkable performance. Recent LIC methods outperform VVC in both PSNR and MS-SSIM. However, low bit-rate reconstructions from LIC suffer from artifacts such as blurring, color drifting, and missing texture. Moreover, these varied artifacts cause image quality metrics to correlate poorly with human perceptual quality. In this paper, we propose PO-ELIC, i.e., Perception-Oriented Efficient Learned Image Coding. To be specific, we adapt ELIC, one of the state-of-the-art LIC models, with adversarial training techniques. We apply a mixture of losses, including a hinge-form adversarial loss, Charbonnier loss, and style loss, to finetune the model towards better perceptual quality. Experimental results demonstrate that our method achieves perceptual quality comparable to HiFiC at a much lower bitrate.
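The loss mixture described above can be sketched in PyTorch. This is an illustrative reconstruction, not the authors' implementation: the weighting scheme (`lmbda`, `w_adv`, `w_style`) and the combination with the rate term are assumptions; the paper should be consulted for the actual formulation and coefficients.

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(x, y, eps=1e-6):
    # Charbonnier distortion: a smooth, robust variant of L1,
    # sqrt((x - y)^2 + eps^2), averaged over all elements.
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def hinge_generator_loss(fake_logits):
    # Hinge-form adversarial loss as seen by the generator (decoder):
    # push the discriminator's logits on reconstructions upward.
    return -fake_logits.mean()

def gram_matrix(feat):
    # feat: (B, C, H, W) -> normalized (B, C, C) Gram matrix,
    # capturing channel-wise feature correlations (texture statistics).
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feat_hat, feat_ref):
    # Style loss: match Gram matrices of (e.g. VGG) features of the
    # reconstruction and the original image.
    return F.mse_loss(gram_matrix(feat_hat), gram_matrix(feat_ref))

def total_loss(x_hat, x, fake_logits, feat_hat, feat_ref, rate,
               lmbda=0.01, w_adv=1.0, w_style=100.0):
    # Hypothetical rate-distortion-perception objective combining the
    # three losses named in the abstract with a bitrate term.
    distortion = charbonnier_loss(x_hat, x)
    perception = (w_adv * hinge_generator_loss(fake_logits)
                  + w_style * style_loss(feat_hat, feat_ref))
    return rate + lmbda * (distortion + perception)
```

During finetuning, `rate` would come from the entropy model of ELIC, `fake_logits` from a discriminator trained alternately with the usual hinge discriminator loss, and the features from a frozen perceptual network.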

Cite

Text

He et al. "PO-ELIC: Perception-Oriented Efficient Learned Image Coding." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00187

Markdown

[He et al. "PO-ELIC: Perception-Oriented Efficient Learned Image Coding." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/he2022cvprw-poelic/) doi:10.1109/CVPRW56347.2022.00187

BibTeX

@inproceedings{he2022cvprw-poelic,
  title     = {{PO-ELIC: Perception-Oriented Efficient Learned Image Coding}},
  author    = {He, Dailan and Yang, Ziming and Yu, Hongjiu and Xu, Tongda and Luo, Jixiang and Chen, Yuan and Gao, Chenjian and Shi, Xinjie and Qin, Hongwei and Wang, Yan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {1763--1768},
  doi       = {10.1109/CVPRW56347.2022.00187},
  url       = {https://mlanthology.org/cvprw/2022/he2022cvprw-poelic/}
}