Assessing Biomedical Knowledge Robustness in Large Language Models by Query-Efficient Sampling Attacks
Abstract
The increasing depth of parametric domain knowledge in large language models (LLMs) is fueling their rapid deployment in real-world applications. Understanding model vulnerabilities in high-stakes and knowledge-intensive tasks is essential to quantifying the trustworthiness of model predictions and regulating model use. The recent discovery of named entities as adversarial examples (i.e., adversarial entities) in natural language processing tasks raises questions about their potential impact on the knowledge robustness of pre-trained and fine-tuned LLMs in high-stakes and specialized domains. We examined the use of type-consistent entity substitution as a template for collecting adversarial entities for medium-sized billion-parameter LLMs with biomedical knowledge. To this end, we developed an embedding-space, gradient-free attack based on power-scaled distance-weighted sampling for robustness evaluation, which has a low query budget and controllable coverage. Our method has favorable query efficiency and scaling over alternative approaches based on black-box gradient-guided search, which we demonstrated for adversarial distractor generation in biomedical question answering. Subsequent failure mode analysis uncovered two regimes of adversarial entities on the attack surface with distinct characteristics. We also showed that entity substitution attacks can manipulate token-wise Shapley value explanations, which become deceptive in this setting. Our approach complements standard evaluations for high-capacity models and the results highlight the brittleness of domain knowledge in LLMs.
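The abstract describes candidate substitute entities being drawn by power-scaled distance-weighted sampling in an embedding space. The sketch below is a minimal illustration of what such a sampler could look like; the function name, parameters, and the choice to weight toward larger distances are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def power_scaled_distance_weighted_sample(query_vec, entity_vecs,
                                          n_samples=10, power=4.0, rng=None):
    """Sample type-consistent candidate entities with probability proportional
    to a power of their embedding-space distance from the original entity.

    query_vec   : (d,) embedding of the original entity
    entity_vecs : (N, d) embeddings of type-consistent candidate entities
    power       : exponent controlling how strongly the sampler favors
                  distant candidates (an assumed, tunable knob)
    """
    rng = np.random.default_rng() if rng is None else rng
    dists = np.linalg.norm(entity_vecs - query_vec, axis=1)
    weights = dists ** power              # power-scaled distance weighting
    probs = weights / weights.sum()
    idx = rng.choice(len(entity_vecs), size=n_samples, replace=False, p=probs)
    return idx
```

Raising the exponent concentrates samples on the most distant candidates, while a small or negative exponent spreads coverage more evenly, which is one way a low query budget with controllable coverage could be realized.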
Cite
Text

Xian et al. "Assessing Biomedical Knowledge Robustness in Large Language Models by Query-Efficient Sampling Attacks." Transactions on Machine Learning Research, 2024.

Markdown

[Xian et al. "Assessing Biomedical Knowledge Robustness in Large Language Models by Query-Efficient Sampling Attacks." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/xian2024tmlr-assessing/)

BibTeX
@article{xian2024tmlr-assessing,
  title = {{Assessing Biomedical Knowledge Robustness in Large Language Models by Query-Efficient Sampling Attacks}},
  author = {Xian, Rui Patrick and Lee, Alex Jihun and Lolla, Satvik and Wang, Vincent and Ro, Russell and Cui, Qiming and Abbasi-Asl, Reza},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/xian2024tmlr-assessing/}
}