Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression
Abstract
We propose a generic approach to codebook-free quantization in learned image compression, called one-hot max (OHM, Ω) quantization. It reorganizes the feature space, resulting in an additional dimension, along which vector quantization yields one-hot vectors by comparing activations. Furthermore, we show how to integrate Ω quantization into a compression system with bitrate adaptation, i.e., full control over the bitrate during inference. We perform experiments on both MNIST and Kodak and report rate-distortion trade-offs in comparison with the integer rounding reference. For low bitrates (< 0.4 bpp), our proposed quantizer yields better performance while also exhibiting other advantageous training and inference properties. Code is available at https://github.com/ifnspaml/OHMQ.
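To make the abstract's description concrete, below is a minimal NumPy sketch of a one-hot max operation. The grouping of latent channels into vectors of length K and the name ohm_quantize are illustrative assumptions, not the paper's API; the actual feature-space reorganization and the differentiable surrogate used during training may differ (see the repository above).

```python
import numpy as np

def ohm_quantize(z: np.ndarray, K: int) -> np.ndarray:
    """One-hot max (OHM) quantization sketch for a latent tensor z of shape (B, C, H, W).

    Assumption: the C channels are grouped into vectors of length K (the added
    dimension); each vector is replaced by the one-hot vector of its maximum.
    """
    B, C, H, W = z.shape
    assert C % K == 0, "channel count must be divisible by the group size K"
    g = z.reshape(B, C // K, K, H, W)        # reorganized feature space: extra axis of size K
    idx = g.argmax(axis=2)                   # compare activations within each group
    onehot = np.eye(K, dtype=z.dtype)[idx]   # (B, C//K, H, W, K) one-hot vectors
    onehot = np.moveaxis(onehot, -1, 2)      # back to (B, C//K, K, H, W)
    return onehot.reshape(B, C, H, W)        # hard inference step; training needs a
                                             # differentiable surrogate (not shown)

# Example: each group of K = 8 activations collapses to a one-hot vector.
z = np.random.randn(1, 32, 4, 4).astype(np.float32)
q = ohm_quantize(z, K=8)
```

Since each length-K one-hot vector is fully determined by its argmax index, at most log2(K) bits suffice per group of K activations before entropy coding, which hints at how the choice of K could serve bitrate adaptation; the mechanism actually used in the paper is detailed in the full text.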
Cite
Text
Löhdefink et al. "Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00181
Markdown
[Löhdefink et al. "Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/lohdefink2022cvprw-adaptive/) doi:10.1109/CVPRW56347.2022.00181
BibTeX
@inproceedings{lohdefink2022cvprw-adaptive,
title = {{Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression}},
author = {Löhdefink, Jonas and Sitzmann, Jonas and Bär, Andreas and Fingscheidt, Tim},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
pages = {1731--1736},
doi = {10.1109/CVPRW56347.2022.00181},
url = {https://mlanthology.org/cvprw/2022/lohdefink2022cvprw-adaptive/}
}