Predictive Inference Is Really Free with In-Context Learning
Abstract
In this work, we consider the problem of constructing prediction intervals (PIs) for point predictions obtained using transformers. We propose a novel method for constructing PIs, called in-context Jackknife+ (ICJ+), which uses a meta-learned transformer trained via in-context learning (ICL) to perform training-free leave-one-out (LOO) predictions, i.e., the transformer is only prompted with LOO datasets and is never retrained. We provide distribution-free coverage guarantees for the proposed ICJ+ algorithm under mild assumptions, by leveraging the stability of in-context-trained transformers. We evaluate the coverage and width of the intervals obtained using ICJ+ on synthetic i.i.d. data for five classes of functions, and observe that their performance is comparable to or better than that of the benchmark Jackknife+ (J+) and the true confidence intervals.
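To make the construction concrete, below is a minimal sketch of the Jackknife+ interval computation that ICJ+ performs, following the quantile convention of Barber et al. (2021). The function icl_predict is a hypothetical stand-in for the meta-learned transformer: it is prompted with a context set and a query point and returns a point prediction, with no parameter updates. This is an illustrative sketch under those assumptions, not the authors' implementation.

import numpy as np

def icj_plus_interval(icl_predict, X, y, x_test, alpha=0.1):
    """Jackknife+ prediction interval from training-free LOO predictions.

    icl_predict(X_ctx, y_ctx, x) is a hypothetical stand-in for the
    meta-learned transformer: it is prompted with the context set
    (X_ctx, y_ctx) plus the query x and returns a point prediction,
    with no retraining.
    """
    n = len(y)
    lo_ends, hi_ends = [], []
    for i in range(n):
        # Leave-one-out prompt: drop the i-th example from the context.
        mask = np.arange(n) != i
        X_loo, y_loo = X[mask], y[mask]
        # LOO residual on the held-out training point.
        r_i = abs(y[i] - icl_predict(X_loo, y_loo, X[i]))
        # LOO point prediction at the test point.
        mu_i = icl_predict(X_loo, y_loo, x_test)
        lo_ends.append(mu_i - r_i)
        hi_ends.append(mu_i + r_i)
    # Jackknife+ quantiles: the upper end is the ceil((1-alpha)(n+1))-th
    # smallest of {mu_i + r_i}, and the lower end is the floor(alpha(n+1))-th
    # smallest of {mu_i - r_i}; clamped to the extremes when n is small.
    k = min(int(np.ceil((1 - alpha) * (n + 1))), n)
    return np.sort(lo_ends)[n - k], np.sort(hi_ends)[k - 1]

Any prompt-conditioned predictor can be plugged in for icl_predict; in ICJ+ this role is played by the transformer, so each call above is a single forward pass over a LOO prompt rather than a model refit.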
Cite
Text
Mukherjee et al. "Predictive Inference Is Really Free with In-Context Learning." ICLR 2025 Workshops: QUESTION, 2025.
Markdown
[Mukherjee et al. "Predictive Inference Is Really Free with In-Context Learning." ICLR 2025 Workshops: QUESTION, 2025.](https://mlanthology.org/iclrw/2025/mukherjee2025iclrw-predictive/)
BibTeX
@inproceedings{mukherjee2025iclrw-predictive,
title = {{Predictive Inference Is Really Free with In-Context Learning}},
author = {Mukherjee, Sohom and Antonov, Ivane and Günder, Kai and Maichle, Magnus Josef},
booktitle = {ICLR 2025 Workshops: QUESTION},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/mukherjee2025iclrw-predictive/}
}