Graph Reasoning Transformers for Knowledge-Aware Question Answering
Abstract
Augmenting language models (LMs) with structured knowledge graphs (KGs) aims to leverage structured world knowledge to enhance the ability of LMs to complete knowledge-intensive tasks. However, existing methods cannot effectively utilize the structured knowledge in a KG because they fail to capture the rich relational semantics of knowledge triplets. Moreover, the modality gap between natural language text and KGs is a challenging obstacle when aligning and fusing cross-modal information. To address these challenges, we propose a novel knowledge-augmented question answering (QA) model, namely Graph Reasoning Transformers (GRT). Unlike conventional node-level methods, GRT treats knowledge triplets as atomic knowledge and uses a triplet-level graph encoder to capture triplet-level graph features. Furthermore, to alleviate the negative effect of the modality gap on joint reasoning, we propose representation alignment pretraining to align cross-modal representations and introduce a cross-modal information fusion module with attention bias to enable fine-grained information fusion. Extensive experiments on three knowledge-intensive QA benchmarks show that GRT outperforms state-of-the-art KG-augmented QA systems, demonstrating the effectiveness and adaptability of the proposed model.
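To make the fusion idea concrete, below is a minimal PyTorch sketch of cross-modal attention with an additive attention bias, in the spirit of the fusion module described above. The class name, tensor shapes, and the exact bias scheme are illustrative assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn

class BiasedCrossModalAttention(nn.Module):
    """Illustrative sketch (not the paper's code): text tokens attend
    over triplet-level graph features, with an additive bias folded
    into the attention logits before the softmax."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_repr, triplet_repr, attn_bias=None):
        # nn.MultiheadAttention adds a float attn_mask to the attention
        # scores, so a bias derived from graph structure can be passed
        # there; attn_bias shape [text_len, num_triplets] is assumed.
        fused, _ = self.attn(
            query=text_repr, key=triplet_repr, value=triplet_repr,
            attn_mask=attn_bias,
        )
        return fused

# Usage with dummy tensors:
text = torch.randn(2, 16, 256)     # [batch, text_len, dim]
triples = torch.randn(2, 10, 256)  # [batch, num_triplets, dim]
bias = torch.zeros(16, 10)         # additive bias over attention logits
fusion = BiasedCrossModalAttention(256)
out = fusion(text, triples, bias)  # -> [2, 16, 256]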
Cite
Text
Zhao et al. "Graph Reasoning Transformers for Knowledge-Aware Question Answering." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29938
Markdown
[Zhao et al. "Graph Reasoning Transformers for Knowledge-Aware Question Answering." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhao2024aaai-graph/) doi:10.1609/AAAI.V38I17.29938
BibTeX
@inproceedings{zhao2024aaai-graph,
title = {{Graph Reasoning Transformers for Knowledge-Aware Question Answering}},
author = {Zhao, Ruilin and Zhao, Feng and Hu, Liang and Xu, Guandong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {19652--19660},
doi = {10.1609/AAAI.V38I17.29938},
url = {https://mlanthology.org/aaai/2024/zhao2024aaai-graph/}
}