Lower Bounding Rate-Distortion from Samples
Abstract
The rate-distortion function $R(D)$ gives the minimum number of bits needed, on average, to compress a random object within a given distortion tolerance. A lower bound on the rate-distortion function therefore represents a fundamental limit on the best possible rate-distortion performance of any lossy compression algorithm, and can help us assess the potential room for improvement. We make a first attempt at an algorithm for estimating such a lower bound from data samples, applicable to general memoryless data sources. Based on a dual characterization of $R(D)$ (Csiszár, 1974), our method solves a constrained maximization problem over a family of functions parameterized by neural networks. On a 2D Gaussian source, we obtain a lower bound within 1 bit of the analytical rate-distortion function. Our code can be found at https://github.com/mandt-lab/empirical-RD-sandwich.
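The dual characterization referenced above can be stated as follows (Csiszár, 1974): for a memoryless source $X \sim P_X$, distortion measure $\rho$, any slope $\lambda \ge 0$, and any positive function $u$,

$$R(D) \;\ge\; -\lambda D + \mathbb{E}_{P_X}[\log u(X)] - \log \sup_{y} \mathbb{E}_{P_X}\!\left[ u(X)\, e^{-\lambda \rho(X, y)} \right],$$

with equality at optimal choices of $\lambda$ and $u$ (under regularity conditions); the constrained maximization mentioned in the abstract amounts to maximizing the right-hand side over $u$. What follows is a hypothetical minimal sketch of this idea, not the authors' released code (see the repository above for that): it parameterizes $\log u$ by a small neural network for a 2D standard Gaussian source under squared-error distortion, and replaces the inner supremum over $y$ with a grid search, which is tractable only in low dimension and ignores finite-sample bias in estimating the supremum. All names and hyperparameters are illustrative.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

lam = 2.0  # fixed slope lambda >= 0; each lambda yields one supporting line under R(D)

# log u(x) parameterized by a small MLP; u > 0 holds by construction
log_u = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(log_u.parameters(), lr=1e-3)

# Grid of reproduction points y: a crude stand-in for the global inner
# maximization over y (any gap here can loosen or invalidate the bound).
g = torch.linspace(-4.0, 4.0, 41)
ys = torch.cartesian_prod(g, g)  # shape (1681, 2)

def neg_objective(x):
    # Monte Carlo estimate of -( E[log u(X)] - log sup_y E[u(X) exp(-lam*||X - y||^2)] )
    log_ux = log_u(x).squeeze(-1)                          # (batch,)
    sq = ((x[:, None, :] - ys[None, :, :]) ** 2).sum(-1)   # (batch, |grid|)
    log_means = torch.logsumexp(log_ux[:, None] - lam * sq, dim=0) - np.log(len(x))
    return -(log_ux.mean() - log_means.max())

for step in range(5000):
    opt.zero_grad()
    loss = neg_objective(torch.randn(512, 2))  # samples from the source N(0, I_2)
    loss.backward()
    opt.step()

# Evaluate the bound R(D) >= -lam*D + c at D = 0.5, where lam = 1/D is the
# slope matching this distortion for N(0, I_2); analytically R(D) = log(2/D)
# nats, i.e. 2 bits at D = 0.5.
with torch.no_grad():
    c = -neg_objective(torch.randn(4096, 2)).item()
D = 0.5
print(f"estimated lower bound: R({D}) >= {(-lam * D + c) / np.log(2):.3f} bits")

Note that each fixed $\lambda$ produces a linear under-estimator $R(D) \ge -\lambda D + c$ valid at every $D$ simultaneously, so sweeping $\lambda$ traces out a lower envelope of the rate-distortion curve.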
Cite
Text
Yang and Mandt. "Lower Bounding Rate-Distortion from Samples." ICLR 2021 Workshops: Neural_Compression, 2021.
Markdown
[Yang and Mandt. "Lower Bounding Rate-Distortion from Samples." ICLR 2021 Workshops: Neural_Compression, 2021.](https://mlanthology.org/iclrw/2021/yang2021iclrw-lower/)
BibTeX
@inproceedings{yang2021iclrw-lower,
  title = {{Lower Bounding Rate-Distortion from Samples}},
  author = {Yang, Yibo and Mandt, Stephan},
  booktitle = {ICLR 2021 Workshops: Neural_Compression},
  year = {2021},
  url = {https://mlanthology.org/iclrw/2021/yang2021iclrw-lower/}
}