Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models
Abstract
We formalize the problem of token-level hard prompt compression for black-box large language models (LLMs). We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via its dual. We compare the performance of existing compression schemes with this fundamental limit on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates that it is critical for the compressor to be aware of the downstream task/query posed to the black-box LLM. We observe a large gap between the performance of current prompt compression methods and the optimal strategy, and propose a query-aware, variable-rate adaptation of a prior work to close the gap.
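To illustrate the linear-programming view mentioned above, the sketch below solves a toy primal version of the distortion-rate problem, where the optimization variable is a conditional distribution q(m | x) over compressed prompts. This is not the authors' code; the prompt distribution `p_x`, distortion table `d`, and rate vector `r` are hypothetical placeholders, and the paper itself computes the limit efficiently via the dual rather than this direct formulation.

```python
# Minimal sketch of the distortion-rate linear program, under assumed notation:
# p_x is the prompt distribution, d[x, m] the distortion of answering with
# compressed prompt m in place of prompt x, and r[m] the rate (token length)
# of m. The objective and constraints are linear in q(m | x), so a generic
# LP solver suffices for small alphabets.
import numpy as np
from scipy.optimize import linprog

def distortion_rate_lp(p_x, d, r, rate_budget):
    """Return the minimum expected distortion achievable under the rate budget."""
    n_x, n_m = d.shape                      # |X| prompts, |M| compressed prompts
    # Objective: E[d(X, M)] = sum_{x,m} p(x) q(m|x) d(x, m), flattened over (x, m).
    c = (p_x[:, None] * d).ravel()
    # Rate constraint: E[r(M)] = sum_{x,m} p(x) q(m|x) r(m) <= rate_budget.
    A_ub = (p_x[:, None] * r[None, :]).ravel()[None, :]
    b_ub = np.array([rate_budget])
    # Each conditional q(. | x) must sum to 1.
    A_eq = np.zeros((n_x, n_x * n_m))
    for x in range(n_x):
        A_eq[x, x * n_m:(x + 1) * n_m] = 1.0
    b_eq = np.ones(n_x)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    return res.fun

# Toy example: 3 prompts, 2 candidate compressed prompts (hypothetical values).
p_x = np.array([0.5, 0.3, 0.2])
d = np.array([[0.0, 1.0], [0.8, 0.1], [0.5, 0.5]])   # assumed distortions
r = np.array([4.0, 2.0])                             # assumed token lengths
print(distortion_rate_lp(p_x, d, r, rate_budget=3.0))
```

Sweeping `rate_budget` traces out the distortion-rate curve against which the paper benchmarks existing prompt compression schemes.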
Cite
Text
Girish et al. "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models." ICML 2024 Workshops: TF2M, 2024.
Markdown
[Girish et al. "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models." ICML 2024 Workshops: TF2M, 2024.](https://mlanthology.org/icmlw/2024/girish2024icmlw-fundamental/)
BibTeX
@inproceedings{girish2024icmlw-fundamental,
title = {{Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models}},
author = {Girish, Adway and Nagle, Alliot and Makkuva, Ashok Vardhan and Bondaschi, Marco and Gastpar, Michael and Kim, Hyeji},
booktitle = {ICML 2024 Workshops: TF2M},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/girish2024icmlw-fundamental/}
}