An Image Compression Framework with Learning-Based Filter

Abstract

In this paper, we introduce a coding framework, VIP-ICT-Codec, built on VTM (the Versatile Video Coding Test Model). First, we propose a color space conversion from the RGB to the YUV domain using a PCA-like operation, together with a method for computing the PCA mean that de-correlates the residual components of the YUV channels. In addition, the correlation between the U and V components is compensated, since they share the same coding tree in VVC. We also learn a residual mapping to alleviate the over-filtering and under-filtering problems observed on specific images. Finally, we treat rate control as an unconstrained Lagrangian problem to reach the target bpp. The results show that we achieve 32.625 dB in the validation phase.
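The PCA-like color conversion can be illustrated with a minimal sketch: estimate the per-channel mean (the "PCA mean"), compute the channel covariance, and project pixels onto the covariance eigenvectors so the output channels are de-correlated. This is an illustrative reconstruction using a standard eigendecomposition, not the authors' exact transform; the function name and interface are assumptions.

```python
import numpy as np

def pca_color_transform(rgb):
    """Sketch of a PCA-like RGB -> de-correlated (YUV-like) transform.

    rgb: (N, 3) array of pixel values, one row per pixel.
    Returns the de-correlated channels plus the mean and basis
    needed to invert the transform at the decoder.
    """
    mean = rgb.mean(axis=0)               # per-channel mean (the "PCA mean")
    centered = rgb - mean
    cov = np.cov(centered, rowvar=False)  # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort eigenvectors by descending eigenvalue so the first output
    # channel (luma-like) carries the most energy.
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order]
    yuv = centered @ basis                # de-correlated channels
    return yuv, mean, basis

# Usage: de-correlate synthetic, strongly correlated "pixels".
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 1))
rgb = np.hstack([base + 0.1 * rng.normal(size=(1000, 1)) for _ in range(3)])
yuv, mean, basis = pca_color_transform(rgb)
cov_out = np.cov(yuv, rowvar=False)
off_diag = cov_out - np.diag(np.diag(cov_out))
print(np.allclose(off_diag, 0.0, atol=1e-8))
```

After the projection, the channel covariance is diagonal, which is the de-correlation property the conversion relies on; the inverse transform is `yuv @ basis.T + mean` since the basis is orthonormal.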
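Treating rate control as an unconstrained Lagrangian problem means picking, for each candidate Lagrange multiplier λ, the operating point that minimizes D + λR, and tuning λ until the resulting rate meets the target bpp. The sketch below does this with a bisection over λ; the rate-distortion points, function name, and search bounds are assumptions for illustration, not the authors' exact scheme.

```python
def find_lambda(rd_points, target_rate, lo=1e-4, hi=1e4, iters=60):
    """Bisection on the Lagrange multiplier to meet a target bitrate.

    rd_points: list of (rate, distortion) operating points for an image.
    For a given lam, the encoder picks the point minimizing D + lam * R;
    a larger lam penalizes rate more, so the selected rate decreases
    monotonically in lam. Returns a lam whose selected rate <= target.
    """
    def rate_at(lam):
        # Rate of the point that minimizes the Lagrangian cost D + lam * R.
        return min(rd_points, key=lambda p: p[1] + lam * p[0])[0]

    for _ in range(iters):
        mid = (lo * hi) ** 0.5  # geometric midpoint: lam spans decades
        if rate_at(mid) > target_rate:
            lo = mid            # rate still too high -> raise lam
        else:
            hi = mid            # target met -> try a smaller lam
    return hi

# Usage: hypothetical convex R-D points (bpp, distortion) for one image.
points = [(0.1, 10.0), (0.2, 6.0), (0.4, 3.0), (0.8, 1.0)]
lam = find_lambda(points, target_rate=0.3)
chosen = min(points, key=lambda p: p[1] + lam * p[0])
print(chosen)
```

Because the selected rate is a step function of λ, the search converges to the smallest multiplier whose chosen point satisfies the bpp constraint.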

Cite

Text

Sun et al. "An Image Compression Framework with Learning-Based Filter." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00084

Markdown

[Sun et al. "An Image Compression Framework with Learning-Based Filter." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/sun2020cvprw-image/) doi:10.1109/CVPRW50498.2020.00084

BibTeX

@inproceedings{sun2020cvprw-image,
  title     = {{An Image Compression Framework with Learning-Based Filter}},
  author    = {Sun, Heming and Liu, Chao and Katto, Jiro and Fan, Yibo},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {602--606},
  doi       = {10.1109/CVPRW50498.2020.00084},
  url       = {https://mlanthology.org/cvprw/2020/sun2020cvprw-image/}
}