Toward Understanding In-Context vs. In-Weight Learning
Abstract
It has recently been demonstrated empirically that in-context learning emerges in transformers when certain distributional properties are present in the training data, but this ability can also diminish upon further training. We provide a new theoretical understanding of these phenomena by identifying simplified distributional properties that give rise to the emergence and eventual disappearance of in-context learning. We do so by first analyzing a simplified model that uses a gating mechanism to choose between an in-weight and an in-context predictor. Through a combination of a generalization error and regret analysis we identify conditions where in-context and in-weight learning emerge. These theoretical findings are then corroborated experimentally by comparing the behaviour of a full transformer on the simplified distributions to that of the stylized model, demonstrating aligned results. We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
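The stylized model is only described at a high level in the abstract. As a rough illustration, the sketch below shows one plausible way such a gated model could look: a learned in-weight classifier over the query, an attention-based in-context predictor over labelled context exemplars, and a scalar gate that convexly mixes the two. All module names, shapes, and the specific attention form are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn


class GatedICLvsIWL(nn.Module):
    """Hypothetical sketch of a gated mixture of an in-weight predictor
    and an in-context predictor (details assumed, not from the paper)."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.in_weight = nn.Linear(dim, num_classes)   # stores input->label associations in its weights
        self.gate = nn.Linear(dim, 1)                  # produces a scalar gate per query
        self.label_value = nn.Embedding(num_classes, num_classes)  # learnable value vector per context label

    def forward(self, query, ctx_x, ctx_y):
        # query: (B, dim); ctx_x: (B, N, dim); ctx_y: (B, N) integer labels
        iw_logits = self.in_weight(query)  # in-weight prediction from the query alone

        # In-context prediction: attend from the query to the context exemplars
        # and aggregate their label values.
        attn = torch.softmax(
            torch.einsum("bd,bnd->bn", query, ctx_x) / query.shape[-1] ** 0.5, dim=-1
        )
        ic_logits = torch.einsum("bn,bnc->bc", attn, self.label_value(ctx_y))

        g = torch.sigmoid(self.gate(query))  # gate in [0, 1] chooses between the two predictors
        return g * ic_logits + (1.0 - g) * iw_logits


# Example usage with made-up dimensions:
model = GatedICLvsIWL(dim=16, num_classes=8)
logits = model(torch.randn(4, 16), torch.randn(4, 5, 16), torch.randint(0, 8, (4, 5)))
```

Under this kind of construction, the gate's behaviour over training is what the paper's generalization-error and regret analysis would characterize: which distributional properties push it toward the in-context branch and which push it back toward the in-weight branch.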
Cite
Text
Chan et al. "Toward Understanding In-Context vs. In-Weight Learning." International Conference on Learning Representations, 2025.

Markdown
[Chan et al. "Toward Understanding In-Context vs. In-Weight Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/chan2025iclr-understanding/)

BibTeX
@inproceedings{chan2025iclr-understanding,
  title     = {{Toward Understanding In-Context vs. In-Weight Learning}},
  author    = {Chan, Bryan and Chen, Xinyi and György, András and Schuurmans, Dale},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/chan2025iclr-understanding/}
}