A Compression Objective and a Cycle Loss for Neural Image Compression

Abstract

In this manuscript we propose two loss terms for neural image compression: a compression objective and a cycle loss. These terms are applied to the encoder output of an autoencoder and are used in combination with reconstruction losses. The compression objective encourages sparsity and low entropy in the activations. The cycle loss term represents the distortion between encoder outputs computed from the original image and from the reconstructed image (code-domain distortion). We train different autoencoders by using the compression objective in combination with different losses: a) MSE, b) MSE and MS-SSIM, c) MSE, MS-SSIM and cycle loss. We observe that images encoded by these differently-trained autoencoders fall at different points on the perception-distortion curve (while having similar bit-rates). In particular, MSE-only training favors low image-domain distortion, whereas cycle loss training favors high perceptual quality.
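The two proposed terms can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the sparsity/entropy weighting, the use of normalized activation magnitudes as a probability proxy for the entropy term, and the MSE form of the code-domain distortion are all assumptions made for clarity.

```python
import numpy as np

def compression_objective(z, alpha=1.0, beta=1.0, eps=1e-8):
    """Hypothetical sketch of the compression objective: encourage
    sparsity (via an L1 term) and low entropy in the encoder
    activations z. The entropy proxy below treats normalized
    activation magnitudes as a probability distribution (assumption)."""
    sparsity = np.abs(z).mean()
    p = np.abs(z).ravel()
    p = p / (p.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return alpha * sparsity + beta * entropy

def cycle_loss(encoder, x, x_hat):
    """Code-domain distortion: distortion (here, MSE) between the
    encoder outputs for the original image x and for the
    reconstruction x_hat."""
    z = encoder(x)
    z_hat = encoder(x_hat)
    return np.mean((z - z_hat) ** 2)
```

In training, such terms would be added to an image-domain reconstruction loss (e.g. MSE and/or MS-SSIM) with suitable weights; the weighting scheme here is illustrative only.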

Cite

Text

Aytekin et al. "A Compression Objective and a Cycle Loss for Neural Image Compression." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Aytekin et al. "A Compression Objective and a Cycle Loss for Neural Image Compression." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/aytekin2019cvprw-compression/)

BibTeX

@inproceedings{aytekin2019cvprw-compression,
  title     = {{A Compression Objective and a Cycle Loss for Neural Image Compression}},
  author    = {Aytekin, Çağlar and Cricri, Francesco and Hallapuro, Antti and Lainema, Jani and Aksu, Emre and Hannuksela, Miska M.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  url       = {https://mlanthology.org/cvprw/2019/aytekin2019cvprw-compression/}
}