Enhancing Relation Extraction via Supervised Rationale Verification and Feedback
Abstract
Despite the rapid progress that existing automated feedback methods have made in correcting the output of large language models (LLMs), these methods cannot be directly applied to the relation extraction (RE) task due to their designated feedback objectives and correction manner. To address this problem, we propose a novel automated feedback framework for RE, which presents a rationale supervisor to verify the rationale and provides re-selected demonstrations as feedback to correct the initial prediction. Specifically, we first design a causal intervention and observation method to collect biased/unbiased rationales for contrastively training the rationale supervisor. Then, we present a verification-feedback-correction procedure to iteratively enhance LLMs' capability of handling the RE task. Extensive experiments show that our proposed framework significantly outperforms existing methods.
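The verification-feedback-correction procedure described in the abstract can be sketched as a simple loop: the LLM predicts a relation with a rationale, a supervisor verifies the rationale, and on rejection the demonstrations are re-selected before retrying. This is a minimal illustration only; all names below (`toy_llm`, `toy_supervisor`, `toy_reselect`, the relation labels) are hypothetical stand-ins, not the authors' implementation.

```python
def correct_with_feedback(llm, supervisor, reselect, instance, demos, max_rounds=3):
    """Iterate predict -> verify -> re-select demos until the rationale passes."""
    relation = None
    for _ in range(max_rounds):
        relation, rationale = llm(instance, demos)   # initial/updated prediction
        if supervisor(rationale):                    # rationale judged unbiased
            return relation
        demos = reselect(instance, rationale)        # feedback: new demonstrations
    return relation                                  # fall back to last prediction

# Toy stand-ins: the "LLM" only answers correctly when given helpful demos.
def toy_llm(instance, demos):
    if "helpful_demo" in demos:
        return "org:founded_by", "entity-grounded rationale"
    return "no_relation", "spurious rationale"

def toy_supervisor(rationale):
    return "spurious" not in rationale

def toy_reselect(instance, rationale):
    return ["helpful_demo"]

result = correct_with_feedback(
    toy_llm, toy_supervisor, toy_reselect,
    "Jobs founded Apple.", ["unhelpful_demo"],
)
print(result)  # round 1 is rejected, demos are re-selected, round 2 succeeds
```

The key design point, per the abstract, is that feedback takes the form of re-selected demonstrations rather than free-text critique, which is what makes the scheme fit classification-style tasks such as RE.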
Cite
Text
Li et al. "Enhancing Relation Extraction via Supervised Rationale Verification and Feedback." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34631
Markdown
[Li et al. "Enhancing Relation Extraction via Supervised Rationale Verification and Feedback." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/li2025aaai-enhancing/) doi:10.1609/AAAI.V39I23.34631
BibTeX
@inproceedings{li2025aaai-enhancing,
title = {{Enhancing Relation Extraction via Supervised Rationale Verification and Feedback}},
author = {Li, Yongqi and Miao, Xin and Zhou, Shen and Xu, Mayi and Ren, Yuyang and Qian, Tieyun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {24521--24529},
doi = {10.1609/AAAI.V39I23.34631},
url = {https://mlanthology.org/aaai/2025/li2025aaai-enhancing/}
}