GraphChain: Large Language Models for Large-Scale Graph Analysis via Tool Chaining
Abstract
Large Language Models (LLMs) face significant limitations when applied to large-scale graphs, struggling with context-window constraints and inflexible reasoning. We introduce GraphChain, a framework that enables LLMs to analyze large graphs by orchestrating dynamic sequences of specialized tools, mimicking human exploratory analysis. GraphChain makes two core technical contributions: (1) Progressive Graph Distillation, a reinforcement learning approach that learns to generate tool sequences balancing task relevance against intermediate-state compression, thereby overcoming LLM context limitations; and (2) Structure-aware Test-Time Adaptation (STTA), a mechanism that uses a lightweight, self-supervised adapter conditioned on graph spectral properties to efficiently adapt a frozen LLM policy to diverse graph structures via soft prompts, without retraining. Experiments show that GraphChain significantly outperforms prior methods, enabling scalable and adaptive LLM-driven graph analysis.
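To make the tool-chaining idea concrete, here is a minimal sketch of an LLM orchestrating graph tools while only ever seeing a compressed summary of the intermediate state. The tool names (`degree_filter`, `ego_subgraph`), the `llm_propose_step` callback, and the summary format are hypothetical illustrations; the abstract does not specify GraphChain's actual toolset or policy interface.

```python
# Hypothetical sketch of LLM-driven tool chaining over a large graph.
# Tool names and llm_propose_step are illustrative, not the paper's API.
import networkx as nx

def degree_filter(g, min_degree):
    """Keep only nodes with degree >= min_degree (shrinks the working state)."""
    keep = [n for n, d in g.degree() if d >= min_degree]
    return g.subgraph(keep).copy()

def ego_subgraph(g, center, radius):
    """Extract the radius-hop neighborhood around a node of interest."""
    return nx.ego_graph(g, center, radius=radius)

def summarize(g):
    """Compress the intermediate state into a short text the LLM can read."""
    return f"{g.number_of_nodes()} nodes, {g.number_of_edges()} edges"

TOOLS = {"degree_filter": degree_filter, "ego_subgraph": ego_subgraph}

def run_chain(llm_propose_step, graph, task, max_steps=5):
    """Let the LLM pick one tool per step, feeding back only a compressed
    summary so the context window never has to hold the full graph."""
    state = graph
    for _ in range(max_steps):
        # llm_propose_step returns e.g. {"tool": ..., "args": {...}, "done": bool}
        step = llm_propose_step(task, summarize(state))
        if step.get("done"):
            break
        state = TOOLS[step["tool"]](state, **step["args"])
    return state
```

The key design point the abstract emphasizes is that each tool both advances the task and compresses the intermediate state, which is what Progressive Graph Distillation is trained (via reinforcement learning) to balance.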
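Similarly, the STTA component can be pictured as a small network that maps a graph's spectral signature to soft-prompt embeddings for the frozen LLM. The sketch below assumes the spectral descriptor is the smallest eigenvalues of the normalized Laplacian and that the adapter is a two-layer MLP; all names, dimensions, and the MLP shape are illustrative guesses, not the paper's actual design.

```python
# Hypothetical sketch of a structure-aware soft-prompt adapter.
import numpy as np
import networkx as nx
import torch
import torch.nn as nn

def spectral_features(g, k=16):
    """Smallest k eigenvalues of the normalized Laplacian, a compact
    descriptor of global graph structure (assumes the graph has >= k nodes)."""
    L = nx.normalized_laplacian_matrix(g).toarray()
    eigvals = np.linalg.eigvalsh(L)  # ascending order
    return torch.tensor(eigvals[:k], dtype=torch.float32)

class SpectralPromptAdapter(nn.Module):
    """Maps graph spectral features to soft prompts for a frozen LLM."""
    def __init__(self, num_eigvals=16, prompt_len=8, embed_dim=4096):
        super().__init__()
        self.prompt_len = prompt_len
        self.embed_dim = embed_dim
        self.net = nn.Sequential(
            nn.Linear(num_eigvals, 256),
            nn.ReLU(),
            nn.Linear(256, prompt_len * embed_dim),
        )

    def forward(self, eigvals):
        # eigvals: (batch, num_eigvals) -> (batch, prompt_len, embed_dim);
        # the soft prompts are prepended to the frozen LLM's input embeddings,
        # so only this adapter needs training, never the LLM itself.
        return self.net(eigvals).view(-1, self.prompt_len, self.embed_dim)
```

Because only the adapter's parameters are trained (self-supervised, per the abstract), adapting to a new graph structure is cheap relative to fine-tuning the LLM policy.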
Cite

Text
Wei et al. "GraphChain: Large Language Models for Large-Scale Graph Analysis via Tool Chaining." Advances in Neural Information Processing Systems, 2025.

Markdown
[Wei et al. "GraphChain: Large Language Models for Large-Scale Graph Analysis via Tool Chaining." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/wei2025neurips-graphchain/)

BibTeX
@inproceedings{wei2025neurips-graphchain,
  title = {{GraphChain: Large Language Models for Large-Scale Graph Analysis via Tool Chaining}},
  author = {Wei, Chunyu and Hu, Wenji and Hao, Xingjia and Wang, Xin and Yang, Yifan and Wang, Yunhai and Tian, Yang and Chen, Yueguo},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/wei2025neurips-graphchain/}
}