Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency

Abstract

We investigate the statistical and computational limits of prompt tuning for transformer-based foundation models. Our key contributions show that prompt tuning on *single-head* transformers with only a *single* self-attention layer: (i) is universal, and (ii) supports efficient (even almost-linear time) algorithms under the Strong Exponential Time Hypothesis (SETH). Statistically, we prove that prompt tuning on these simplest-possible transformers is a universal approximator for sequence-to-sequence Lipschitz functions. In addition, we provide lower bounds, exponential in $dL$ and in $1/\epsilon$, on the number of soft-prompt tokens required for prompt tuning to memorize any dataset with 1-layer, 1-head transformers. Computationally, we identify a phase transition in the efficiency of prompt tuning, determined by the norm of the *soft-prompt-induced* keys and queries, and provide an upper bound criterion. Beyond this criterion, no sub-quadratic (efficient) algorithm for prompt tuning exists under SETH. Within this criterion, we showcase our theory by proving the existence of almost-linear time prompt tuning inference algorithms. These fundamental limits provide important necessary conditions for designing expressive and efficient prompt tuning methods for practitioners.
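For readers unfamiliar with the setting the abstract analyzes, below is a minimal sketch (not the authors' code) of prompt tuning on a 1-layer, 1-head self-attention block: trainable soft-prompt tokens are prepended to the input sequence while all transformer weights stay frozen. All names, shapes, and hyperparameters are illustrative assumptions.

```python
import torch

d, L, m = 16, 8, 4  # embedding dim, input length, number of soft-prompt tokens (assumed values)
torch.manual_seed(0)

# Frozen single-head attention weights.
W_q = torch.randn(d, d)
W_k = torch.randn(d, d)
W_v = torch.randn(d, d)

# Trainable soft prompt: the only parameters updated during prompt tuning.
P = torch.zeros(m, d, requires_grad=True)

def attention_with_prompt(X, P):
    """Single-head self-attention over the prompt-augmented sequence [P; X]."""
    Z = torch.cat([P, X], dim=0)                      # (m + L, d)
    Q, K, V = Z @ W_q, Z @ W_k, Z @ W_v
    A = torch.softmax(Q @ K.T / d**0.5, dim=-1)       # soft-prompt-induced queries/keys enter here
    return (A @ V)[m:]                                # outputs for the original L tokens only

X = torch.randn(L, d)                                 # toy input sequence
target = torch.randn(L, d)                            # toy sequence-to-sequence target
opt = torch.optim.Adam([P], lr=1e-2)                  # optimize the soft prompt only

for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(attention_with_prompt(X, P), target)
    loss.backward()
    opt.step()
```

The norm of the prompt-induced queries and keys computed inside `attention_with_prompt` is the quantity whose magnitude governs the efficiency phase transition described in the abstract.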

Cite

Text

Hu et al. "Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency." International Conference on Learning Representations, 2025.

Markdown

[Hu et al. "Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/hu2025iclr-fundamental/)

BibTeX

@inproceedings{hu2025iclr-fundamental,
  title     = {{Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency}},
  author    = {Hu, Jerry Yao-Chieh and Wang, Wei-Po and Gilani, Ammar and Li, Chenyang and Song, Zhao and Liu, Han},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/hu2025iclr-fundamental/}
}