Text + Sketch: Image Compression at Ultra Low Rates
Abstract
Recent advances in text-to-image generative models provide the ability to generate high-quality images from short text descriptions. These foundation models, when pre-trained on billion-scale datasets, are effective for various downstream tasks with little or no further training. A natural question to ask is how such models may be adapted for image compression. We investigate several techniques in which the pre-trained models can be directly used to implement compression schemes targeting novel low rate regimes. We show how text descriptions can be used in conjunction with side information to generate high-fidelity reconstructions that preserve both semantics and spatial structure of the original. We demonstrate that at very low bit-rates, our method can significantly improve upon learned compressors in terms of perceptual and semantic fidelity, despite no end-to-end training.
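Below is a minimal, hypothetical sketch of the idea described in the abstract: the encoder transmits only a short caption plus a compressed spatial "sketch" (side information), and the decoder regenerates the image with a pre-trained text-to-image model conditioned on both. The specific components (BLIP for captioning, an HED edge map as the sketch, ControlNet over Stable Diffusion as the decoder) are illustrative assumptions, not necessarily the exact pipeline of the paper, and the rate accounting (entropy coding of the caption and sketch) is omitted.

```python
# Hedged illustration of a "text + sketch" compression scheme using off-the-shelf
# pre-trained models; component choices are assumptions for the sake of the example.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from controlnet_aux import HEDdetector
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel


def encode(image: Image.Image) -> tuple[str, Image.Image]:
    """Encoder: produce a short text description plus an edge-map side information."""
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    captioner = BlipForConditionalGeneration.from_pretrained(
        "Salesforce/blip-image-captioning-base"
    )
    inputs = processor(image, return_tensors="pt")
    caption = processor.decode(
        captioner.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True
    )
    # The binary edge map carries the spatial structure; in a real codec it would be
    # downsampled and losslessly compressed before transmission.
    hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
    sketch = hed(image)
    return caption, sketch


def decode(caption: str, sketch: Image.Image) -> Image.Image:
    """Decoder: regenerate an image consistent with both the caption and the sketch."""
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(caption, image=sketch, num_inference_steps=30).images[0]


if __name__ == "__main__":
    caption, sketch = encode(Image.open("input.png").convert("RGB"))
    decode(caption, sketch).save("reconstruction.png")
```

In this reading, the bit-rate is determined entirely by the caption and the compressed sketch, while all generative heavy lifting happens at the decoder with frozen pre-trained models, matching the abstract's claim of no end-to-end training.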
Cite
Text
Lei et al. "Text + Sketch: Image Compression at Ultra Low Rates." ICML 2023 Workshops: NCW, 2023.
Markdown
[Lei et al. "Text + Sketch: Image Compression at Ultra Low Rates." ICML 2023 Workshops: NCW, 2023.](https://mlanthology.org/icmlw/2023/lei2023icmlw-text/)
BibTeX
@inproceedings{lei2023icmlw-text,
title = {{Text + Sketch: Image Compression at Ultra Low Rates}},
author = {Lei, Eric and Uslu, Yigit Berkay and Hassani, Hamed and Bidokhti, Shirin Saeedi},
booktitle = {ICML 2023 Workshops: NCW},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/lei2023icmlw-text/}
}