HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models

Abstract

We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We exploit this property, applying fully convolutional models to lossless compression: we demonstrate a method to scale the VAE-based 'Bits-Back with ANS' algorithm to large color photographs, achieving state of the art for compression of full-size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
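The bits-back scheme in the abstract builds on asymmetric numeral systems (ANS), a last-in-first-out entropy coder: symbols are decoded in the reverse of encoding order, which is exactly the stack behavior bits-back coding exploits to "get back" the bits spent encoding the latent. As a rough illustration of that mechanism only (a toy range-ANS codec with an unbounded integer state, not the Craystack API; `encode`/`decode` and the frequency table are made up for this sketch):

```python
# Toy rANS codec. State is a single growing integer; real coders
# keep the state bounded by streaming out low-order bits.

def encode(state, symbol, freqs, total):
    # Map state into the sub-interval assigned to `symbol`.
    start = sum(freqs[:symbol])          # cumulative frequency
    f = freqs[symbol]
    return (state // f) * total + start + (state % f)

def decode(state, freqs, total):
    # Recover the last symbol encoded and the previous state.
    slot = state % total
    start = 0
    for symbol, f in enumerate(freqs):
        if slot < start + f:
            prev = (state // total) * f + (slot - start)
            return prev, symbol
        start += f

freqs = [3, 1, 4]          # unnormalized symbol frequencies
total = sum(freqs)
msg = [0, 2, 2, 1, 0]

state = 1
for s in msg:
    state = encode(state, s, freqs, total)

decoded = []
for _ in msg:
    state, s = decode(state, freqs, total)
    decoded.append(s)
decoded.reverse()          # ANS pops symbols in reverse (stack order)
assert decoded == msg
```

Because decoding inverts encoding exactly and returns the earlier state, an encoder can first *decode* a latent sample from the compressed stream (spending bits) and later recover them, which is the core of Bits-Back with ANS.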

Cite

Text

Townsend et al. "HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models." International Conference on Learning Representations, 2020.

Markdown

[Townsend et al. "HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/townsend2020iclr-hilloc/)

BibTeX

@inproceedings{townsend2020iclr-hilloc,
  title     = {{HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models}},
  author    = {Townsend, James and Bird, Thomas and Kunze, Julius and Barber, David},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/townsend2020iclr-hilloc/}
}