SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
Abstract
Large Language Models (LLMs) are increasingly deployed in high-risk domains. However, state-of-the-art LLMs often exhibit hallucinations, raising serious concerns about their reliability. Prior work has explored adversarial attacks to elicit hallucinations in LLMs, but these methods often rely on unrealistic prompts, either by inserting nonsensical tokens or by altering the original semantic intent. Consequently, such approaches provide limited insight into how hallucinations arise in real-world settings. In contrast, adversarial attacks in computer vision typically involve realistic modifications to input images. However, the problem of identifying realistic adversarial prompts for eliciting LLM hallucinations remains largely underexplored. To address this gap, we propose Semantically Equivalent and Coherent Attacks (SECA), which elicit hallucinations via realistic modifications to the prompt that preserve its meaning while maintaining semantic coherence. Our contributions are threefold: (i) we formulate finding realistic attacks for hallucination elicitation as a constrained optimization problem over the input prompt space under semantic equivalence and coherence constraints; (ii) we introduce a constraint-preserving zeroth-order method to effectively search for adversarial yet feasible prompts; and (iii) we demonstrate through experiments on open-ended multiple-choice question answering tasks that SECA achieves higher attack success rates while incurring almost no semantic equivalence or semantic coherence errors compared to existing methods. SECA highlights the sensitivity of both open-source and commercial gradient-inaccessible LLMs to realistic and plausible prompt variations. Code is available at https://github.com/Buyun-Liang/SECA.
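As a rough illustration of contribution (i), the search for realistic adversarial prompts can be viewed as a constrained optimization problem of roughly the following form (a minimal sketch in LaTeX; the symbols x, x', f, and the constraint predicates are illustrative assumptions rather than the paper's exact notation):

\[
\begin{aligned}
\max_{x' \in \mathcal{X}} \quad & \Pr\big[\, f(x') \text{ is a hallucinated answer} \,\big] \\
\text{s.t.} \quad & \mathrm{SemEquiv}(x', x) = 1 && \text{(the rewritten prompt } x' \text{ preserves the meaning of the original prompt } x\text{)} \\
& \mathrm{Coherent}(x') = 1 && \text{(} x' \text{ remains a fluent, semantically coherent prompt)}
\end{aligned}
\]

Contribution (ii) then corresponds to searching this feasible set with a constraint-preserving zeroth-order method, i.e., one that relies only on model queries rather than gradients, which also makes it applicable to gradient-inaccessible commercial LLMs.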
Cite
Text
Liang et al. "SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations." Advances in Neural Information Processing Systems, 2025.
Markdown
[Liang et al. "SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liang2025neurips-seca/)
BibTeX
@inproceedings{liang2025neurips-seca,
title = {{SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations}},
author = {Liang, Buyun and Peng, Liangzu and Luo, Jinqi and Thaker, Darshan and Chan, Kwan Ho Ryan and Vidal, Rene},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/liang2025neurips-seca/}
}