Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models

Abstract

We formalize the problem of prompt compression for large language models (LLMs) and present a framework to unify token-level prompt compression methods that create hard prompts for black-box models. We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via the dual of the linear program. Using the distortion-rate function as the baseline, we study the performance of existing compression schemes on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates the criticality of query-aware prompt compression, where the compressor has knowledge of the downstream task/query for the black-box LLM. We show that there is a large gap between the performance of current prompt compression methods and the optimal strategy, and propose Adaptive QuerySelect, a query-aware, variable-rate adaptation of a prior method, to close the gap. We extend our experiments to a small natural language dataset to further confirm the findings from our synthetic experiments.
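
As a rough illustration of the kind of formulation the abstract refers to (a sketch, not the paper's own notation or derivation), a distortion-rate function for a lossy compression scheme can generically be posed as a linear program over conditional distributions. Here the prompt $x$, compressed prompt $m$, distortion measure $d$, length function $\ell$, and rate budget $R$ are illustrative placeholders:

$$
D(R) \;=\; \min_{p(m \mid x)} \;\sum_{x,\,m} p(x)\, p(m \mid x)\, d(x, m)
\quad \text{s.t.} \quad \sum_{x,\,m} p(x)\, p(m \mid x)\, \ell(m) \le R, \;\; \sum_{m} p(m \mid x) = 1, \;\; p(m \mid x) \ge 0.
$$

For a fixed source distribution $p(x)$ and distortion $d$, both the objective and the constraints are linear in the variables $p(m \mid x)$, which is why such a limit can be computed as a linear program and, as the abstract notes, approached efficiently through its dual.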

Cite

Text

Nagle et al. "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-3009

Markdown

[Nagle et al. "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/nagle2024neurips-fundamental/) doi:10.52202/079017-3009

BibTeX

@inproceedings{nagle2024neurips-fundamental,
  title     = {{Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models}},
  author    = {Nagle, Alliot and Girish, Adway and Bondaschi, Marco and Gastpar, Michael and Makkuva, Ashok Vardhan and Kim, Hyeji},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3009},
  url       = {https://mlanthology.org/neurips/2024/nagle2024neurips-fundamental/}
}