Had Enough of Experts? Elicitation and Evaluation of Bayesian Priors from Large Language Models

Abstract

Large language models (LLMs) have been extensively studied for their ability to generate convincing natural language sequences; however, their utility for quantitative information retrieval is less well understood. Here we explore the feasibility of LLMs as a mechanism for quantitative knowledge retrieval, to aid elicitation of expert-informed prior distributions for Bayesian statistical models. We present a prompt engineering framework that treats an LLM as an interface to scholarly literature, comparing responses in different contexts and domains against more established approaches. We discuss the implications and challenges of treating LLMs as 'experts'.
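As a rough illustration of the kind of elicitation pipeline the abstract describes, the sketch below prompts a chat-based LLM for a parametric prior and parses the reply into a SciPy distribution. This is a minimal sketch, not the paper's actual framework: the OpenAI client, the model name, the JSON schema, and the elicit_prior helper are all assumptions introduced here for illustration.

# Minimal sketch of LLM-based prior elicitation (not the paper's code).
# Assumes the openai Python client (>= 1.0) with an OPENAI_API_KEY set in
# the environment; the model name and JSON schema are illustrative choices.
import json

from openai import OpenAI
from scipy import stats

client = OpenAI()

SYSTEM = (
    "You are a domain expert assisting with Bayesian prior elicitation. "
    "Reply with JSON only, in the form: "
    '{"family": "normal", "mean": <float>, "sd": <float>}'
)


def elicit_prior(quantity: str, model: str = "gpt-4o"):
    """Ask the LLM for a normal prior on `quantity`; return a frozen SciPy distribution."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variation in elicited parameters
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Give a prior for: {quantity}"},
        ],
    )
    params = json.loads(response.choices[0].message.content)
    return stats.norm(loc=params["mean"], scale=params["sd"])


prior = elicit_prior("mean adult human body temperature in degrees Celsius")
print(prior.mean(), prior.interval(0.95))  # prior centre and 95% interval

In practice one would likely repeat the query across prompts, contexts and models and compare or pool the elicited distributions, as the abstract's comparison "in different contexts and domains" suggests.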

Cite

Text

Selby et al. "Had Enough of Experts? Elicitation and Evaluation of Bayesian Priors from Large Language Models." NeurIPS 2024 Workshops: BDU, 2024.

Markdown

[Selby et al. "Had Enough of Experts? Elicitation and Evaluation of Bayesian Priors from Large Language Models." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/selby2024neuripsw-enough/)

BibTeX

@inproceedings{selby2024neuripsw-enough,
  title     = {{Had Enough of Experts? Elicitation and Evaluation of Bayesian Priors from Large Language Models}},
  author    = {Selby, David Antony and Spriestersbach, Kai and Iwashita, Yuichiro and Bappert, Dennis and Warrier, Archana and Mukherjee, Sumantrak and Asim, Muhammad Nabeel and Kise, Koichi and Vollmer, Sebastian Josef},
  booktitle = {NeurIPS 2024 Workshops: BDU},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/selby2024neuripsw-enough/}
}