LLM Hallucination Reasoning with Zero-Shot Knowledge Test
Abstract
LLM hallucination, where LLMs occasionally generate unfaithful text, poses significant challenges for practical applications of LLMs. Most existing detection methods require external knowledge, LLM fine-tuning, or hallucination-labeled datasets, and they do not distinguish between different hallucination types, a distinction that is crucial for improving detection performance. We introduce a new task, Hallucination Reasoning, which classifies LLM-generated text into one of three types: aligned, misaligned, or fabricated. Our novel source-free, zero-shot method identifies whether an LLM has sufficient knowledge about a given prompt and the generated text. Our experiments on new datasets demonstrate the effectiveness of our method in hallucination reasoning and underscore its importance for enhancing detection performance.
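To make the three-way taxonomy concrete, here is a minimal Python sketch of how a hallucination-reasoning step could route generated text into the three labels. This is not the paper's implementation: the knowledge_test and alignment_check callables are hypothetical placeholders standing in for (1) a zero-shot check of whether the model has enough knowledge about the prompt and text, and (2) a check of whether the text agrees with that knowledge.

# Minimal sketch of three-way hallucination reasoning (illustrative only,
# not the authors' code). `knowledge_test` and `alignment_check` are
# hypothetical placeholders supplied by the caller.
from enum import Enum


class HallucinationType(Enum):
    ALIGNED = "aligned"        # model knows the subject and the text is faithful
    MISALIGNED = "misaligned"  # model knows the subject but the text contradicts it
    FABRICATED = "fabricated"  # model lacks the knowledge, so the text is made up


def reason_about_text(prompt, text, knowledge_test, alignment_check):
    """Classify generated `text` for `prompt` into one of the three types."""
    if not knowledge_test(prompt, text):
        # Without enough knowledge, the generation is treated as fabricated.
        return HallucinationType.FABRICATED
    if alignment_check(prompt, text):
        return HallucinationType.ALIGNED
    return HallucinationType.MISALIGNED

Separating fabricated from misaligned text in this way is the kind of distinction the abstract argues is crucial for improving downstream detection.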
Cite
Text
Lee et al. "LLM Hallucination Reasoning with Zero-Shot Knowledge Test." NeurIPS 2024 Workshops: SoLaR, 2024.
Markdown
[Lee et al. "LLM Hallucination Reasoning with Zero-Shot Knowledge Test." NeurIPS 2024 Workshops: SoLaR, 2024.](https://mlanthology.org/neuripsw/2024/lee2024neuripsw-llm/)
BibTeX
@inproceedings{lee2024neuripsw-llm,
title = {{LLM Hallucination Reasoning with Zero-Shot Knowledge Test}},
author = {Lee, Seongmin and Hsu, Hsiang and Chen, Chun-Fu},
booktitle = {NeurIPS 2024 Workshops: SoLaR},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/lee2024neuripsw-llm/}
}