Demonstrations in In-Context Learning for LLMs with Large Label Space

Abstract

In-context learning (ICL) enables pre-trained Large Language Models (LLMs) to solve new tasks given only a few demonstrations as input. However, there is so far little understanding of how many demonstrations are required in real-world scenarios, e.g., classification with a large label space. In this work, we conduct a meticulous study across various settings, with different LLMs and datasets. Our insights suggest that no demonstrations might be required, especially when the class names are descriptive and the model is strong (e.g., GPT-4). Nevertheless, datasets with an extremely large label space can benefit from additional human-created demonstrations, while automatically generated ones might not yield additional benefits.
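
A minimal sketch, not taken from the paper, of how the zero-shot and few-shot ICL settings discussed above could be assembled for a classification task with many labels. The label set, demonstrations, and query are hypothetical placeholders; only the prompt construction is illustrated, with no calls to a specific LLM API.

# Hypothetical label set; in the large-label-space setting this would contain
# hundreds or thousands of (ideally descriptive) class names.
LABELS = ["astronomy", "botany", "cryptography", "linguistics", "oncology"]

# Hypothetical human-created demonstrations (input, label) pairs.
DEMONSTRATIONS = [
    ("The cipher resists chosen-plaintext attacks.", "cryptography"),
    ("The seedlings were grown under controlled light.", "botany"),
]

def build_prompt(query: str, demonstrations=None) -> str:
    """Assemble an ICL classification prompt.

    With demonstrations=None this is the zero-shot setting, which relies
    only on the descriptive class names; passing demonstrations gives the
    few-shot setting studied in the paper.
    """
    parts = ["Classify the input into one of the following labels:",
             ", ".join(LABELS)]
    for text, label in (demonstrations or []):
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    query = "Researchers mapped the orbit of the newly found exoplanet."
    print(build_prompt(query))                  # zero-shot prompt
    print(build_prompt(query, DEMONSTRATIONS))  # few-shot prompt with demonstrations
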

Cite

Text

Li et al. "Demonstrations in In-Context Learning for LLMs with Large Label Space." ICML 2024 Workshops: LCFM, 2024.

Markdown

[Li et al. "Demonstrations in In-Context Learning for LLMs with Large Label Space." ICML 2024 Workshops: LCFM, 2024.](https://mlanthology.org/icmlw/2024/li2024icmlw-demonstrations/)

BibTeX

@inproceedings{li2024icmlw-demonstrations,
  title     = {{Demonstrations in In-Context Learning for LLMs with Large Label Space}},
  author    = {Li, Zhan and Liu, Fanghui and Cevher, Volkan and Chrysos, Grigorios},
  booktitle = {ICML 2024 Workshops: LCFM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/li2024icmlw-demonstrations/}
}