Re-Ex: Revising After Explanation Reduces the Factual Errors in LLM Responses
Abstract
Mitigating hallucination is a key challenge that must be overcome to reliably deploy large language models (LLMs) in real-world scenarios. Recently, various methods have been proposed to detect and revise factual errors in LLM-generated text in order to reduce hallucination. In this paper, we propose Re-Ex, a method for post-editing LLM-generated responses. Re-Ex introduces a novel reasoning step dubbed the factual error explanation step. Re-Ex revises the initial LLM response in three steps: first, external tools are used to retrieve evidence of the factual errors in the initial response; next, the LLM is instructed to explain the problematic parts of the response based on the gathered evidence; finally, the LLM revises the initial response using the explanations provided in the previous step. In addition to the explanation step, Re-Ex incorporates new prompting techniques that reduce the token count and inference time required for the revision process. Compared with existing methods including FacTool, CoVE, and RARR, Re-Ex achieves better detection and revision performance with less inference time and fewer tokens on multiple benchmarks.
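The abstract outlines a three-step revise-after-explain pipeline. Below is a minimal sketch of that pipeline in Python; `call_llm` and `retrieve_evidence` are hypothetical placeholders (not from the paper or its released code), and the prompt wording is illustrative rather than the paper's actual prompts.

```python
# Sketch of the Re-Ex three-step revision pipeline described in the abstract.
# The two helpers below are hypothetical stubs: swap in your own LLM client
# and external retrieval tool (e.g., a web-search API).

def call_llm(prompt: str) -> str:
    """Placeholder for a single LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError("plug in your LLM client here")

def retrieve_evidence(response: str) -> str:
    """Placeholder for an external tool that gathers evidence about the
    factual claims made in the response."""
    raise NotImplementedError("plug in your retrieval tool here")

def re_ex(question: str, initial_response: str) -> str:
    # Step 1: use external tools to retrieve evidence of factual errors
    # in the initial LLM response.
    evidence = retrieve_evidence(initial_response)

    # Step 2: instruct the LLM to explain the problematic parts of the
    # response, grounded in the gathered evidence.
    explanation = call_llm(
        f"Question: {question}\n"
        f"Response: {initial_response}\n"
        f"Evidence: {evidence}\n"
        "Explain which parts of the response are factually incorrect "
        "according to the evidence."
    )

    # Step 3: revise the initial response using the explanation from step 2.
    revised = call_llm(
        f"Question: {question}\n"
        f"Response: {initial_response}\n"
        f"Explanation of errors: {explanation}\n"
        "Rewrite the response, correcting only the errors identified above."
    )
    return revised
```

Keeping the explanation as an explicit intermediate step, rather than asking the model to revise directly from raw evidence, is the core idea the paper attributes its gains to.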
Cite
Text
Kim et al. "Re-Ex: Revising After Explanation Reduces the Factual Errors in LLM Responses." ICLR 2024 Workshops: R2-FM, 2024.Markdown
[Kim et al. "Re-Ex: Revising After Explanation Reduces the Factual Errors in LLM Responses." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/kim2024iclrw-reex/)BibTeX
@inproceedings{kim2024iclrw-reex,
  title = {{Re-Ex: Revising After Explanation Reduces the Factual Errors in LLM Responses}},
  author = {Kim, Juyeon and Lee, Jeongeun and Chang, YoonHo and Choi, Chanyeol and Kim, Jun-Seong and Sohn, Jy-yong},
  booktitle = {ICLR 2024 Workshops: R2-FM},
  year = {2024},
  url = {https://mlanthology.org/iclrw/2024/kim2024iclrw-reex/}
}