GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence

Abstract

LLMs can generate factually incorrect statements even when provided access to reference documents. Such errors can be dangerous in high-stakes applications (e.g., document-grounded QA for healthcare or finance). We present GenAudit, a tool intended to assist in fact-checking LLM responses for document-grounded tasks. GenAudit suggests edits to the LLM response by revising or removing claims that are not supported by the reference document, and presents evidence from the reference for facts that do appear to have support. We train models to execute these tasks and design an interactive interface to present the suggested edits and evidence to users. Comprehensive evaluation by human raters shows that GenAudit can detect errors in the outputs of 8 different LLMs when summarizing documents from diverse domains. To ensure that most errors are flagged by the system, we propose a method that increases error recall while minimizing the impact on precision. We release our tool (GenAudit) and fact-checking model for public use.
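
The abstract describes the task at the level of inputs and outputs: given a reference document and an LLM response, flag unsupported claims, surface supporting evidence for the rest, and suggest revisions. The sketch below is only an illustration of that input/output contract, not the released GenAudit model or API; all names (`FactCheckResult`, `audit_response`, the threshold parameter) are hypothetical, and a simple word-overlap heuristic stands in for the paper's trained models.

```python
# Illustrative sketch only: a toy stand-in for document-grounded fact-checking.
# GenAudit itself uses trained models; the word-overlap heuristic here merely
# demonstrates the claim -> (supported?, evidence, suggested edit) structure.
from dataclasses import dataclass
import re


@dataclass
class FactCheckResult:
    claim: str                  # one sentence from the LLM response
    supported: bool             # whether the reference appears to support it
    evidence: list[str]         # reference sentences offered as evidence
    suggested_edit: str | None  # None means "keep as-is"; "" means "remove"


def _sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def _overlap(claim: str, candidate: str) -> float:
    """Fraction of claim tokens that also occur in a candidate evidence sentence."""
    claim_tok = set(claim.lower().split())
    cand_tok = set(candidate.lower().split())
    return len(claim_tok & cand_tok) / max(len(claim_tok), 1)


def audit_response(reference: str, response: str,
                   threshold: float = 0.5) -> list[FactCheckResult]:
    """Check each response sentence against the reference document.

    Lowering `threshold` flags more sentences as unsupported (higher error
    recall, lower precision); raising it does the opposite. This is only a
    simple illustration of that trade-off, not the method from the paper.
    """
    ref_sents = _sentences(reference)
    results = []
    for claim in _sentences(response):
        ranked = sorted(ref_sents, key=lambda s: _overlap(claim, s), reverse=True)
        best = ranked[0] if ranked else ""
        supported = _overlap(claim, best) >= threshold
        results.append(FactCheckResult(
            claim=claim,
            supported=supported,
            evidence=[best] if supported else [],
            # A real system would propose a minimal revision; here unsupported
            # claims are simply marked for removal.
            suggested_edit=None if supported else "",
        ))
    return results
```

In this toy version, the decision threshold is the single knob trading error recall against precision; the paper proposes its own method for that trade-off, which the heuristic above does not attempt to reproduce.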

Cite

Text

Krishna et al. "GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence." NeurIPS 2024 Workshops: SafeGenAi, 2024.

Markdown

[Krishna et al. "GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence." NeurIPS 2024 Workshops: SafeGenAi, 2024.](https://mlanthology.org/neuripsw/2024/krishna2024neuripsw-genaudit/)

BibTeX

@inproceedings{krishna2024neuripsw-genaudit,
  title     = {{GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence}},
  author    = {Krishna, Kundan and Ramprasad, Sanjana and Gupta, Prakhar and Wallace, Byron C. and Lipton, Zachary Chase and Bigham, Jeffrey P.},
  booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/krishna2024neuripsw-genaudit/}
}