Improving Inference for Neural Image Compression
Abstract
We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compression, we identify three approximation gaps which limit performance in the conventional approach: an amortization gap, a discretization gap, and a marginalization gap. We propose remedies for each of these three limitations based on ideas related to iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression. In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method.
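To give a rough flavor of the "iterative inference" and "stochastic annealing" ideas mentioned in the abstract, the sketch below refines the latents of a single image at test time while annealing a soft rounding operation toward hard integers. This is a minimal, hypothetical illustration only, not the authors' implementation: the names `refine_latents` and `decode`, the sigmoid-based soft rounding, and the linear temperature schedule are all assumptions, and the rate term of the compression objective is omitted for brevity.

```python
# Hypothetical sketch (not the paper's code): per-instance latent refinement
# with an annealed soft-rounding step, assuming a pretrained decoder `decode`
# and an encoder-provided initialization `z0`.
import torch

def refine_latents(decode, x, z0, steps=200, lr=1e-2):
    """Optimize the latent z for one image x, gradually hardening toward integers."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for t in range(steps):
        tau = max(0.5 * (1 - t / steps), 1e-3)  # temperature anneal: soft -> hard
        # differentiable surrogate for rounding; approaches round(z) as tau -> 0
        frac = z - torch.floor(z)
        z_soft = torch.floor(z) + torch.sigmoid((frac - 0.5) / tau)
        x_hat = decode(z_soft)
        loss = torch.mean((x - x_hat) ** 2)  # distortion only; rate term omitted here
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.round(z.detach())  # discrete latents, ready for entropy coding

# Toy usage with a random linear "decoder" standing in for a trained network.
W = torch.randn(8, 16)
decode = lambda z: z @ W
x = torch.randn(1, 16)
z0 = torch.randn(1, 8)
z_q = refine_latents(decode, x, z0)
```

In this toy version the encoder is used only to initialize `z0`; closing the amortization gap then amounts to the per-image gradient refinement, while the annealed soft rounding is a stand-in for addressing the discretization gap.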
Cite

Text

Yang et al. "Improving Inference for Neural Image Compression." Neural Information Processing Systems, 2020.

Markdown

[Yang et al. "Improving Inference for Neural Image Compression." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/yang2020neurips-improving/)

BibTeX
@inproceedings{yang2020neurips-improving,
  title     = {{Improving Inference for Neural Image Compression}},
  author    = {Yang, Yibo and Bamler, Robert and Mandt, Stephan},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/yang2020neurips-improving/}
}