In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Abstract

Large language models (LLMs) frequently hallucinate, yet our understanding of why they make these errors remains limited. In this study, we aim to understand the underlying mechanisms of LLM hallucinations from the perspective of inner representations. We discover a pattern associated with hallucinations: correct generations tend to have sharper context activations in the hidden states of the in-context tokens, compared to those of incorrect generations. Leveraging this signal, we propose an entropy-based metric to quantify this “sharpness” and incorporate it into the decoding process, i.e., we use the entropy value to adjust the next-token prediction distribution to improve the factuality and overall quality of the generated text. Experiments on multiple benchmarks demonstrate the consistent effectiveness of our method, e.g., gains of up to 8.6 absolute points on TruthfulQA. We believe this study can improve our understanding of hallucinations and serve as a practical solution for hallucination mitigation.
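
To make the decoding idea concrete, below is a minimal PyTorch sketch of how an entropy-based sharpness signal could be folded into next-token prediction: each candidate token receives an entropy computed over its activations on the in-context tokens, and that entropy is used to rescale the candidate's logit before sampling. The tensor shapes, the softmax over context activations, and the `alpha` weight are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' released implementation) of reweighting
# next-token probabilities with an entropy-based "sharpness" signal.
# All tensor shapes and the `alpha` weight are illustrative assumptions.
import torch
import torch.nn.functional as F


def entropy(p: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy along the last dimension of a probability tensor."""
    return -(p * (p + eps).log()).sum(dim=-1)


def sharpness_adjusted_distribution(
    logits: torch.Tensor,               # (vocab_size,) raw next-token logits
    context_activations: torch.Tensor,  # (vocab_size, num_context_tokens), hypothetical
    alpha: float = 1.0,                 # strength of the entropy adjustment (assumed)
) -> torch.Tensor:
    """Penalize candidates whose in-context activations are diffuse (high
    entropy) and favor those with sharp (low-entropy) activations."""
    # Turn each candidate's activations over in-context tokens into a distribution.
    context_probs = F.softmax(context_activations, dim=-1)
    # One entropy value per candidate next token: low entropy = "sharp".
    h = entropy(context_probs)                      # (vocab_size,)
    # Sharper candidates receive a smaller penalty, i.e., a relative boost.
    adjusted_logits = logits - alpha * h
    return F.softmax(adjusted_logits, dim=-1)


if __name__ == "__main__":
    # Toy example with random data, just to show the reweighting in action.
    torch.manual_seed(0)
    vocab_size, num_context_tokens = 8, 5
    logits = torch.randn(vocab_size)
    context_activations = torch.randn(vocab_size, num_context_tokens)
    probs = sharpness_adjusted_distribution(logits, context_activations, alpha=0.5)
    print(probs)
```

In this sketch the adjustment is a simple linear penalty on the logits; how the context activations are extracted from the model's hidden states and how the entropy term is weighted follow the paper's actual formulation.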

Cite

Text

Chen et al. "In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation." ICLR 2024 Workshops: R2-FM, 2024.

Markdown

[Chen et al. "In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/chen2024iclrw-incontext/)

BibTeX

@inproceedings{chen2024iclrw-incontext,
  title     = {{In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation}},
  author    = {Chen, Shiqi and Xiong, Miao and Liu, Junteng and Wu, Zhengxuan and Xiao, Teng and Gao, Siyang and He, Junxian},
  booktitle = {ICLR 2024 Workshops: R2-FM},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/chen2024iclrw-incontext/}
}