Comparative Analysis of Demonstration Selection Algorithms for In-Context Learning in Large Language Models (Student Abstract)

Abstract

Demonstration selection algorithms play a crucial role in optimizing Large Language Models' (LLMs) in-context learning performance. Despite numerous proposed algorithms, their comparative effectiveness remains understudied. We present a comprehensive evaluation of six state-of-the-art demonstration selection algorithms across five datasets, examining both their effectiveness and computational efficiency. Our findings reveal significant trade-offs: while some algorithms achieve superior accuracy, they incur substantial computational costs. We also find that increasing the number of demonstration examples does not consistently improve performance, and some sophisticated algorithms struggle to outperform random selection in certain scenarios. These insights provide valuable benchmarks for future algorithm development and practical implementation. Our code is available at https://github.com/Tizzzzy/Demonstration_Selection_Overview.
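To make the comparison concrete, here is a minimal sketch of the two ends of the spectrum the abstract contrasts: a random-selection baseline versus a simple similarity-based selector. This is an illustrative assumption, not the paper's actual algorithms; the bag-of-words cosine similarity and the `pool`/`query` data are hypothetical stand-ins for the embedding models and datasets the paper evaluates.

```python
import random
from collections import Counter
from math import sqrt

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def random_select(pool, k, seed=0):
    """Baseline: sample k demonstrations uniformly at random."""
    return random.Random(seed).sample(pool, k)

def similarity_select(pool, query, k):
    """Pick the k demonstrations whose inputs are most similar to the query."""
    return sorted(pool, key=lambda d: bow_cosine(d["input"], query),
                  reverse=True)[:k]

# Hypothetical labeled pool and test query for a sentiment task.
pool = [
    {"input": "the movie was great", "label": "positive"},
    {"input": "terrible plot and acting", "label": "negative"},
    {"input": "a great film overall", "label": "positive"},
    {"input": "the weather is sunny today", "label": "neutral"},
]
query = "what a great movie"
demos = similarity_select(pool, query, k=2)  # demonstrations fed to the prompt
```

Random selection costs essentially nothing per query, while even this toy similarity selector scores every pool example per query, which hints at the accuracy-versus-compute trade-off the paper measures at scale.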

Cite

Text

Shu and Du. "Comparative Analysis of Demonstration Selection Algorithms for In-Context Learning in Large Language Models (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35299

Markdown

[Shu and Du. "Comparative Analysis of Demonstration Selection Algorithms for In-Context Learning in Large Language Models (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/shu2025aaai-comparative/) doi:10.1609/AAAI.V39I28.35299

BibTeX

@inproceedings{shu2025aaai-comparative,
  title     = {{Comparative Analysis of Demonstration Selection Algorithms for In-Context Learning in Large Language Models (Student Abstract)}},
  author    = {Shu, Dong and Du, Mengnan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29490--29492},
  doi       = {10.1609/AAAI.V39I28.35299},
  url       = {https://mlanthology.org/aaai/2025/shu2025aaai-comparative/}
}