Universal Rate-Distortion-Perception Representations for Lossy Compression
Abstract
In the context of lossy compression, Blau and Michaeli (2019) adopt a mathematical notion of perceptual quality and define the rate-distortion-perception function, generalizing the classical rate-distortion tradeoff. We consider the notion of (approximately) universal representations, in which one may fix an encoder and vary only the decoder to (approximately) achieve any point along the perception-distortion tradeoff. We show that the penalty for fixing the encoder is zero in the Gaussian case, and we give bounds for arbitrary source distributions, under MSE distortion and $W_2^2(\cdot,\cdot)$ perception losses. In principle, a small penalty obviates the need to design an end-to-end system for each particular objective. We provide experimental results on MNIST and SVHN suggesting that there exist practical constructions that suffer only a small penalty, i.e., machine learning models learn representation maps that are approximately universal within their operational capacities.
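For context, the following is a minimal sketch of the rate-distortion-perception function in the standard formulation of Blau and Michaeli (2019); the symbols $\Delta$ (per-sample distortion measure) and $d$ (divergence between distributions) are generic placeholders, not notation quoted from this paper.

% Rate-distortion-perception (RDP) function, standard formulation
% following Blau and Michaeli (2019). Delta and d below are generic
% placeholders, not notation taken from this paper.
\begin{equation*}
  R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
  \quad \text{subject to} \quad
  \mathbb{E}\big[\Delta(X, \hat{X})\big] \le D ,
  \qquad
  d\big(p_X, p_{\hat{X}}\big) \le P .
\end{equation*}
% In the setting of the abstract, Delta is squared error (MSE) and
% d is the squared Wasserstein-2 distance W_2^2(p_X, p_{\hat{X}}).

A universal representation, in the sense of the abstract, fixes the encoder attaining one point of this tradeoff and asks how close varying only the decoder can come to every other achievable $(D, P)$ pair.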
Cite
Text
Zhang et al. "Universal Rate-Distortion-Perception Representations for Lossy Compression." ICLR 2021 Workshops: Neural_Compression, 2021.
Markdown
[Zhang et al. "Universal Rate-Distortion-Perception Representations for Lossy Compression." ICLR 2021 Workshops: Neural_Compression, 2021.](https://mlanthology.org/iclrw/2021/zhang2021iclrw-universal/)
BibTeX
@inproceedings{zhang2021iclrw-universal,
title = {{Universal Rate-Distortion-Perception Representations for Lossy Compression}},
author = {Zhang, George and Chen, Jun and Khisti, Ashish J},
booktitle = {ICLR 2021 Workshops: Neural_Compression},
year = {2021},
url = {https://mlanthology.org/iclrw/2021/zhang2021iclrw-universal/}
}