Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models

Abstract

Large language models (LLMs) have recently demonstrated remarkable performance across various Natural Language Processing tasks. In the field of multi-hop reasoning, the Chain-of-Thought (CoT) prompting method has emerged as a paradigm, using curated stepwise reasoning demonstrations to enhance LLMs' ability to reason and produce coherent rationale pathways. To ensure the accuracy, reliability, and traceability of the generated answers, many studies have incorporated information retrieval (IR) to provide LLMs with external knowledge. However, existing CoT-with-IR methods decompose questions into sub-questions based on a single compositionality type, which limits their effectiveness for questions involving multiple compositionality types. Additionally, these methods suffer from inefficient retrieval, as complex questions often contain abundant information, leading to the retrieval of irrelevant information inconsistent with the query's intent. In this work, we propose a novel question decomposition framework called TRQA for multi-hop question answering, which addresses these limitations. Our framework introduces a reasoning tree (RT) to represent the structure of complex questions. It consists of four components: the Reasoning Tree Constructor (RTC), the Question Generator (QG), the Retrieval and LLM Interaction Module (RAIL), and the Answer Aggregation Module (AAM). Specifically, the RTC predicts diverse sub-question structures to construct the reasoning tree, allowing a more comprehensive representation of complex questions. The QG generates sub-questions for the leaf nodes in the reasoning tree, and we explore two methods for QG: prompt-based and T5-based approaches. Within RAIL, the retrieval component retrieves documents aligned with the sub-questions, while the LLM formulates answers based on the retrieved information. Finally, the AAM aggregates answers along the reasoning tree, producing a definitive response from bottom to top.
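The bottom-up flow the abstract describes (answer leaf sub-questions first, then fold their answers into the context of parent questions) can be sketched as follows. This is a minimal illustration only: the `Node` class and the `retrieve`, `llm_answer`, and `aggregate` functions are hypothetical stand-ins, not the paper's actual RTC/QG/RAIL/AAM implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the reasoning tree: a (sub-)question plus its children."""
    question: str
    children: list["Node"] = field(default_factory=list)

def retrieve(question: str) -> str:
    """Stand-in for the retrieval step: return a supporting passage."""
    return f"passage relevant to: {question}"

def llm_answer(question: str, context: str) -> str:
    """Stand-in for the LLM call that answers one sub-question."""
    return f"answer({question})"

def aggregate(node: Node) -> str:
    """Answer the tree bottom-up: solve children first, then use their
    answers as extra context for the parent question (the AAM step)."""
    child_answers = [aggregate(child) for child in node.children]
    context = retrieve(node.question) + " " + " ".join(child_answers)
    return llm_answer(node.question, context)

# Example: a two-hop question decomposed into one bridging sub-question.
root = Node(
    "What country is the director of Inception from?",
    [Node("Who directed Inception?")],
)
print(aggregate(root))
```

With real retrieval and LLM components plugged in, each recursive call would ground one sub-question in retrieved evidence, and the root call would return the final aggregated answer.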

Cite

Text

Zhang et al. "Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29928

Markdown

[Zhang et al. "Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-tree/) doi:10.1609/AAAI.V38I17.29928

BibTeX

@inproceedings{zhang2024aaai-tree,
  title     = {{Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models}},
  author    = {Zhang, Kun and Zeng, Jiali and Meng, Fandong and Wang, Yuanzhuo and Sun, Shiqi and Bai, Long and Shen, Huawei and Zhou, Jie},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {19560-19568},
  doi       = {10.1609/AAAI.V38I17.29928},
  url       = {https://mlanthology.org/aaai/2024/zhang2024aaai-tree/}
}