Rambla: A Framework for Evaluating the Reliability of LLMs as Assistants in the Biomedical Domain
Abstract
Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potentially high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants ($\texttt{RAmBLA}$) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring LLM freeform responses mimicking real-world user interactions. We evaluate LLM performance using semantic similarity with a ground truth response, through an evaluator LLM.
Cite
Text
Bolton et al. "Rambla: A Framework for Evaluating the Reliability of LLMs as Assistants in the Biomedical Domain." ICLR 2024 Workshops: R2-FM, 2024.
Markdown
[Bolton et al. "Rambla: A Framework for Evaluating the Reliability of LLMs as Assistants in the Biomedical Domain." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/bolton2024iclrw-rambla/)
BibTeX
@inproceedings{bolton2024iclrw-rambla,
title = {{Rambla: A Framework for Evaluating the Reliability of LLMs as Assistants in the Biomedical Domain}},
author = {Bolton, William James and Poyiadzi, Rafael and Morrell, Edward and van Bergen Gonzalez Bueno, Gabriela and Goetz, Lea},
booktitle = {ICLR 2024 Workshops: R2-FM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/bolton2024iclrw-rambla/}
}