Specialization-Generalization Transition in Exemplar-Based In-Context Learning
Abstract
In-context learning (ICL) is a striking behavior seen in pretrained transformers that allows models to generalize to unseen tasks after seeing only a few examples. We empirically investigate the conditions on the pretraining distribution necessary for ICL to emerge. Previous work has focused on the number of distinct tasks in the pretraining distribution – here, we use a different notion of task diversity to study the emergence of ICL in transformers trained on linear functions. We find that as task diversity increases, transformers undergo a transition from a specialized solution, which exhibits ICL only within the pretraining distribution, to a solution that generalizes out of distribution to the entire task space. We also investigate the nature of the solutions learned by the transformer on both sides of the transition, and observe similar transitions in nonlinear regression problems.
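The sketch below illustrates the kind of in-context linear-regression setup the abstract describes: each training sequence is a context of (x, w·x) pairs, and a diversity knob controls how the task vectors w are drawn during pretraining versus evaluation on the full task space. This is not the authors' code; the paper uses a different notion of task diversity than a simple count of distinct tasks, and the finite task pool here (along with names such as `sample_icl_batch`, `task_pool`, and the chosen shapes) is only an illustrative stand-in.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation) of
# generating pretraining data for in-context linear regression. Passing a fixed
# `task_pool` restricts pretraining to a limited set of tasks; passing None
# draws a fresh w ~ N(0, I) per sequence, i.e. the full task space used for
# out-of-distribution evaluation.
import numpy as np

def sample_icl_batch(rng, batch_size=64, dim=8, context_len=16, task_pool=None):
    """Return (xs, ys): contexts of (x, w·x) pairs for in-context linear regression."""
    if task_pool is None:
        # Full task space: a new task vector for every sequence.
        ws = rng.normal(size=(batch_size, dim))
    else:
        # Restricted pretraining distribution: reuse tasks from a fixed pool.
        idx = rng.integers(len(task_pool), size=batch_size)
        ws = task_pool[idx]
    xs = rng.normal(size=(batch_size, context_len, dim))
    ys = np.einsum('bld,bd->bl', xs, ws)  # noiseless targets y = w·x
    return xs, ys

rng = np.random.default_rng(0)
pool = rng.normal(size=(32, 8))                 # e.g. a pool of 32 pretraining tasks
xs, ys = sample_icl_batch(rng, task_pool=pool)  # restricted pretraining batch
xs_ood, ys_ood = sample_icl_batch(rng)          # full task space for OOD evaluation
```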
Cite
Text
Goddard et al. "Specialization-Generalization Transition in Exemplar-Based In-Context Learning." NeurIPS 2024 Workshops: SciForDL, 2024.Markdown
[Goddard et al. "Specialization-Generalization Transition in Exemplar-Based In-Context Learning." NeurIPS 2024 Workshops: SciForDL, 2024.](https://mlanthology.org/neuripsw/2024/goddard2024neuripsw-specializationgeneralization/)BibTeX
@inproceedings{goddard2024neuripsw-specializationgeneralization,
  title = {{Specialization-Generalization Transition in Exemplar-Based In-Context Learning}},
  author = {Goddard, Chase and Smith, Lindsay M. and Ngampruetikorn, Vudtiwat and Schwab, David J.},
  booktitle = {NeurIPS 2024 Workshops: SciForDL},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/goddard2024neuripsw-specializationgeneralization/}
}