Hierarchical Language Model Design for Interpretable Graph Reasoning
Abstract
Large language models (LLMs) are increasingly being explored for graph tasks. Despite their remarkable success in text-based tasks, LLMs' capabilities in understanding explicit graph structures remain limited, particularly with large graphs. In this work, we introduce the Hierarchical Language Model for Graph (HLM-G), which employs a two-block architecture to capture node-centric local information and interaction-centric global structure, effectively enhancing graph structure understanding. The proposed scheme allows LLMs to address various graph queries with high efficacy, efficiency, and robustness, while reducing computational costs on large-scale graph tasks. Furthermore, we demonstrate the interpretability of our model using intrinsic attention weights and established explainers. Comprehensive evaluations across diverse graph reasoning and real-world tasks at the node, link, and graph levels highlight the superiority of our method, marking a significant advancement in the application of LLMs to graph understanding.
Cite
Text
Khurana et al. "Hierarchical Language Model Design for Interpretable Graph Reasoning." Transactions on Machine Learning Research, 2025.

Markdown

[Khurana et al. "Hierarchical Language Model Design for Interpretable Graph Reasoning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/khurana2025tmlr-hierarchical/)

BibTeX
@article{khurana2025tmlr-hierarchical,
title = {{Hierarchical Language Model Design for Interpretable Graph Reasoning}},
author = {Khurana, Sambhav and Li, Xiner and Gui, Shurui and Ji, Shuiwang},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/khurana2025tmlr-hierarchical/}
}