In-Context Learning with Retrieved Demonstrations for Language Models: A Survey

Abstract

Large language models have demonstrated remarkable few-shot in-context learning (ICL) capabilities, adapting to new tasks from a small number of demonstrations. However, the efficacy of ICL depends heavily on the selection of these demonstrations. Recent work has introduced retrieval-based in-context learning (RetICL), which dynamically retrieves demonstrations tailored to each input query. This approach leverages existing databases and retrieval systems, improving efficiency and scalability while mitigating the biases inherent in manual example selection. Given the promising results and growing interest in RetICL, we present a comprehensive survey of the field. Our review covers design choices for ICL demonstration retrieval models, retrieval training procedures, inference strategies, and current applications of RetICL. Finally, we explore future directions for this emerging technology.
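To make the RetICL setup concrete, below is a minimal sketch of per-query demonstration retrieval: embed a pool of labeled examples, retrieve the nearest neighbors of the test query, and assemble them into a few-shot prompt. This is an illustration under stated assumptions, not the survey's method; the pool, the `build_prompt` helper, and the choice of a sentence-transformers encoder are all hypothetical.

```python
# Minimal RetICL sketch: retrieve the demonstrations most similar to the
# query and format them as a few-shot prompt. `demo_pool` and `build_prompt`
# are illustrative names, not from the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical pool of labeled (input, output) demonstrations.
demo_pool = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
    ("An instant classic.", "positive"),
    ("Two hours I will never get back.", "negative"),
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
# Embed the pool once up front; a real system would index this for scale.
demo_embeddings = encoder.encode(
    [x for x, _ in demo_pool], normalize_embeddings=True
)

def retrieve_demonstrations(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k pool examples most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = demo_embeddings @ q  # cosine similarity (vectors are normalized)
    top_k = np.argsort(-scores)[:k]
    return [demo_pool[i] for i in top_k]

def build_prompt(query: str, k: int = 2) -> str:
    """Assemble a few-shot prompt from the retrieved demonstrations."""
    demos = retrieve_demonstrations(query, k)
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
    return f"{shots}\nInput: {query}\nLabel:"

print(build_prompt("A waste of a good cast."))
```

This off-the-shelf dense retriever corresponds to the simplest design point the survey covers; much of the literature it reviews instead trains the retriever so that similarity scores reflect which demonstrations actually improve the language model's predictions.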

Cite

Text

Luo et al. "In-Context Learning with Retrieved Demonstrations for Language Models: A Survey." Transactions on Machine Learning Research, 2024.

Markdown

[Luo et al. "In-Context Learning with Retrieved Demonstrations for Language Models: A Survey." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/luo2024tmlr-incontext/)

BibTeX

@article{luo2024tmlr-incontext,
  title     = {{In-Context Learning with Retrieved Demonstrations for Language Models: A Survey}},
  author    = {Luo, Man and Xu, Xin and Liu, Yue and Pasupat, Panupong and Kazemi, Mehran},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/luo2024tmlr-incontext/}
}