Transformers Are Minimax Optimal Nonparametric In-Context Learners
Abstract
We study the efficacy of in-context learning (ICL) from the viewpoint of statistical learning theory. We develop approximation and generalization analyses for a transformer composed of a deep neural network and one linear attention layer, pretrained on nonparametric regression tasks sampled from general function spaces including the Besov space and piecewise $\gamma$-smooth class. In particular, we show that sufficiently trained transformers can achieve -- and even improve upon -- the minimax optimal estimation risk in context by encoding the most relevant basis representations during pretraining. Our analysis extends to high-dimensional or sequential data and distinguishes the \emph{pretraining} and \emph{in-context} generalization gaps, establishing upper and lower bounds w.r.t. both the number of tasks and in-context examples. These findings shed light on the effectiveness of few-shot prompting and the roles of task diversity and representation learning for ICL.
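For intuition, the architecture described in the abstract can be read as a learned feature map followed by a single linear attention layer that regresses the query output on the in-context examples; the classical minimax benchmark for estimating an $s$-smooth function in dimension $d$ is of order $n^{-2s/(2s+d)}$ up to logarithmic factors, though the paper's precise in-context rates depend on its own assumptions. The sketch below is purely illustrative and is not the authors' construction: the feature-map shapes, the key/query matrix `Gamma`, and the averaging readout are assumptions made only for this example.

```python
# Illustrative sketch (not the paper's implementation): a deep feature map
# followed by one linear attention layer performing in-context regression.
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x, W1, W2):
    """Hypothetical two-layer ReLU network mapping inputs to features."""
    h = np.maximum(x @ W1, 0.0)
    return h @ W2

def linear_attention_predict(X_ctx, y_ctx, x_query, params):
    """Single linear attention layer: the query attends to the context
    (feature, label) pairs with scores <Gamma phi_query, phi_i> (no softmax),
    and the prediction is the score-weighted average of the context labels."""
    W1, W2, Gamma = params
    Phi_ctx = feature_map(X_ctx, W1, W2)           # (n, r) context features
    phi_q = feature_map(x_query[None, :], W1, W2)  # (1, r) query features
    scores = (phi_q @ Gamma) @ Phi_ctx.T           # (1, n) linear attention scores
    return (scores @ y_ctx) / X_ctx.shape[0]       # average over n context examples

# Toy usage: n in-context examples from a synthetic regression task.
d, r, n = 4, 8, 32
W1 = rng.normal(size=(d, r)) / np.sqrt(d)
W2 = rng.normal(size=(r, r)) / np.sqrt(r)
Gamma = np.eye(r)  # placeholder for a matrix that pretraining would learn
X_ctx = rng.normal(size=(n, d))
y_ctx = np.sin(X_ctx[:, 0]) + 0.1 * rng.normal(size=n)
x_query = rng.normal(size=d)
print(linear_attention_predict(X_ctx, y_ctx, x_query, (W1, W2, Gamma)))
```

In this reading, pretraining over many tasks shapes the feature map and `Gamma` so that the attention layer implements a good regression rule on the learned basis, which is the mechanism the abstract refers to as "encoding the most relevant basis representations during pretraining."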
Cite
Text
Kim et al. "Transformers Are Minimax Optimal Nonparametric In-Context Learners." ICML 2024 Workshops: TF2M, 2024.
Markdown
[Kim et al. "Transformers Are Minimax Optimal Nonparametric In-Context Learners." ICML 2024 Workshops: TF2M, 2024.](https://mlanthology.org/icmlw/2024/kim2024icmlw-transformers-b/)
BibTeX
@inproceedings{kim2024icmlw-transformers-b,
title = {{Transformers Are Minimax Optimal Nonparametric In-Context Learners}},
author = {Kim, Juno and Nakamaki, Tai and Suzuki, Taiji},
booktitle = {ICML 2024 Workshops: TF2M},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/kim2024icmlw-transformers-b/}
}