Neural Image Compression with Quantization Rectifier

Abstract

Neural image compression has been shown to outperform traditional image codecs in terms of rate-distortion performance. However, quantization introduces errors in the compression process, which can degrade the quality of the compressed image. While existing approaches address the train-test mismatch incurred during quantization, the random impact of quantization on the expressiveness of image features remains unaddressed. This paper presents a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization. Our method introduces a neural network architecture that predicts unquantized features from the quantized ones, preserving feature expressiveness for better image reconstruction quality. We develop a soft-to-predictive training technique to integrate QR into existing neural image codecs. In evaluation, we integrate QR into state-of-the-art neural image codecs and compare the enhanced models against their baselines on the widely used Kodak benchmark. The results show consistent coding-efficiency improvement from QR with a negligible increase in running time.
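The abstract describes the rectifier only at a high level. As a rough illustration of the idea, the PyTorch sketch below shows one plausible shape such a module could take: a small residual network that maps the quantized latent back toward its unquantized counterpart. All specifics here (the `QuantizationRectifier` name, the channel count, the residual two-conv design, and the L2 training objective) are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class QuantizationRectifier(nn.Module):
    """Hypothetical sketch: predict unquantized features from quantized ones.

    A small residual convolutional network refines the quantized latent
    y_hat toward the pre-quantization latent y, aiming to preserve feature
    expressiveness for the decoder. Layer choices are illustrative only.
    """

    def __init__(self, channels: int = 192):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, y_hat: torch.Tensor) -> torch.Tensor:
        # Residual prediction: quantized latent plus a learned correction.
        return y_hat + self.refine(y_hat)


if __name__ == "__main__":
    # Toy usage: rectify a hard-quantized latent and regress toward the
    # unquantized latent with an L2 loss (illustrative training signal).
    y = torch.randn(1, 192, 16, 16)   # pre-quantization features
    y_hat = torch.round(y)            # hard (rounding) quantization
    qr = QuantizationRectifier(channels=192)
    loss = nn.functional.mse_loss(qr(y_hat), y)
    loss.backward()
    print(f"rectification loss: {loss.item():.4f}")
```

A residual formulation is a natural default here, since the quantized latent is already close to the target and the network only needs to learn the small correction; the paper's actual architecture and its soft-to-predictive training schedule may differ.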

Cite

Text

Luo and Chen. "Neural Image Compression with Quantization Rectifier." ICML 2023 Workshops: NCW, 2023.

Markdown

[Luo and Chen. "Neural Image Compression with Quantization Rectifier." ICML 2023 Workshops: NCW, 2023.](https://mlanthology.org/icmlw/2023/luo2023icmlw-neural/)

BibTeX

@inproceedings{luo2023icmlw-neural,
  title     = {{Neural Image Compression with Quantization Rectifier}},
  author    = {Luo, Wei and Chen, Bo},
  booktitle = {ICML 2023 Workshops: NCW},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/luo2023icmlw-neural/}
}