Debate on Graph: A Flexible and Reliable Reasoning Framework for Large Language Models

Abstract

Large Language Models (LLMs) may suffer from hallucinations in real-world applications due to a lack of relevant knowledge. In contrast, knowledge graphs are extensive, multi-relational structures that store a vast array of symbolic facts. Consequently, integrating LLMs with knowledge graphs has been extensively explored, with Knowledge Graph Question Answering (KGQA) serving as a critical touchstone for this integration. The task requires LLMs to answer natural language questions by retrieving relevant triples from knowledge graphs. However, existing methods face two significant challenges: *excessively long reasoning paths that distract from answer generation*, and *false-positive relations that hinder path refinement*. In this paper, we propose an iterative interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning and Debating over Graphs (DoG). Specifically, DoG employs a subgraph-focusing mechanism that allows LLMs to attempt an answer after each reasoning step, thereby mitigating the impact of lengthy reasoning paths. In addition, DoG uses a multi-role debate team to gradually simplify complex questions, reducing the influence of false-positive relations; this debate mechanism ensures the reliability of the reasoning process. Experimental results on five public datasets demonstrate the effectiveness and superiority of our architecture. Notably, DoG outperforms the state-of-the-art method ToG by 23.7% and 9.1% in accuracy on WebQuestions and GrailQA, respectively. Furthermore, integration experiments with various LLMs on these datasets highlight the flexibility of DoG.

Cite

Text

Ma et al. "Debate on Graph: A Flexible and Reliable Reasoning Framework for Large Language Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34658

Markdown

[Ma et al. "Debate on Graph: A Flexible and Reliable Reasoning Framework for Large Language Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/ma2025aaai-debate/) doi:10.1609/AAAI.V39I23.34658

BibTeX

@inproceedings{ma2025aaai-debate,
  title     = {{Debate on Graph: A Flexible and Reliable Reasoning Framework for Large Language Models}},
  author    = {Ma, Jie and Gao, Zhitao and Chai, Qi and Sun, Wangchun and Wang, Pinghui and Pei, Hongbin and Tao, Jing and Song, Lingyun and Liu, Jun and Zhang, Chen and Cui, Lizhen},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24768--24776},
  doi       = {10.1609/AAAI.V39I23.34658},
  url       = {https://mlanthology.org/aaai/2025/ma2025aaai-debate/}
}