Dynamically Pruned Message Passing Networks for Large-Scale Knowledge Graph Reasoning
Abstract
We propose Dynamically Pruned Message Passing Networks (DPMPN) for large-scale knowledge graph reasoning. In contrast to existing models, whether embedding-based or path-based, we learn an input-dependent subgraph to explicitly model a sequential reasoning process. Each subgraph is dynamically constructed, expanding itself selectively under a flow-style attention mechanism. In this way, we can not only construct graphical explanations to interpret predictions, but also prune message passing in Graph Neural Networks (GNNs) to scale with the size of graphs. Taking inspiration from the consciousness prior proposed by Bengio, we design a two-GNN framework that encodes a global input-invariant graph-structured representation and learns a local input-dependent one, coordinated by an attention module. Experiments show that our model both provides clear graphical explanations and predicts accurately, outperforming most state-of-the-art methods on knowledge base completion tasks.
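As a rough illustration of the pruning idea described above (not the authors' implementation; the graph, attention scores, and top-k rule here are hypothetical stand-ins), attention-guided subgraph expansion can be sketched as iteratively scoring the neighbors of the current frontier and keeping only the highest-attention nodes, so message passing never touches the full graph:

```python
import numpy as np

def expand_subgraph(adj, attention, frontier, visited, k):
    """One step of attention-guided expansion: collect the unvisited
    neighbors of the frontier, then keep only the top-k by attention
    score (hard pruning of message passing)."""
    candidates = set()
    for node in frontier:
        candidates.update(adj.get(node, []))
    candidates -= visited
    if not candidates:
        return set(), visited
    # rank candidates by attention and keep the k best
    ranked = sorted(candidates, key=lambda n: attention[n], reverse=True)
    new_frontier = set(ranked[:k])
    return new_frontier, visited | new_frontier

# toy graph: node -> list of neighbors (illustrative only)
adj = {0: [1, 2, 3], 1: [4], 2: [5], 3: [6]}
attention = np.array([0.9, 0.8, 0.1, 0.6, 0.5, 0.2, 0.7])

frontier, visited = {0}, {0}
for _ in range(2):  # two expansion steps
    frontier, visited = expand_subgraph(adj, attention, frontier, visited, k=2)

print(sorted(visited))  # low-attention branches (nodes 2 and 5) are pruned
```

The pruned `visited` set is the input-dependent subgraph on which messages would actually be passed; low-attention branches are never expanded, which is what lets the method scale with graph size.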
Cite
Text
Xu et al. "Dynamically Pruned Message Passing Networks for Large-Scale Knowledge Graph Reasoning." International Conference on Learning Representations, 2020.
Markdown
[Xu et al. "Dynamically Pruned Message Passing Networks for Large-Scale Knowledge Graph Reasoning." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/xu2020iclr-dynamically/)
BibTeX
@inproceedings{xu2020iclr-dynamically,
title = {{Dynamically Pruned Message Passing Networks for Large-Scale Knowledge Graph Reasoning}},
author = {Xu, Xiaoran and Feng, Wei and Jiang, Yunsheng and Xie, Xiaohui and Sun, Zhiqing and Deng, Zhi-Hong},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/xu2020iclr-dynamically/}
}