Interpreting Temporal Knowledge Graph Reasoning (Student Abstract)
Abstract
Temporal knowledge graph reasoning is an essential task that holds immense value in diverse real-world applications. Existing studies mainly focus on leveraging structural and sequential dependencies, excelling in tasks like entity and link prediction. However, they confront a notable interpretability gap in their predictions, a pivotal facet for comprehending model behavior. In this study, we propose an innovative method, LSGAT, which not only exhibits remarkable precision in entity predictions but also enhances interpretability by identifying pivotal historical events influencing event predictions. LSGAT enables concise explanations for prediction outcomes, offering valuable insights into the otherwise enigmatic "black box" reasoning process. Through an exploration of the implications of the most influential events, it facilitates a deeper understanding of the underlying mechanisms governing predictions.
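The abstract does not detail LSGAT's architecture, so the following is only an illustrative sketch, not the authors' method. It assumes a generic attention-over-history mechanism and shows how attention weights over a query's past events could be ranked to surface the most influential ones as an explanation; all entities, embeddings, and the scoring function here are hypothetical.

```python
# Illustrative sketch only: the abstract does not describe LSGAT's mechanism,
# so this demonstrates a generic attention-over-history explanation,
# not the authors' actual model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Hypothetical historical events preceding the query (subject, relation, ?, t_q).
history = [
    ("EntityA", "visit", "EntityB", 1),
    ("EntityA", "meet", "EntityC", 2),
    ("EntityA", "sign_treaty", "EntityB", 3),
]

dim = 16
# Toy embeddings standing in for learned entity/relation/time representations.
event_emb = {ev: rng.normal(size=dim) for ev in history}
query_emb = rng.normal(size=dim)  # embedding of the query (subject, relation, t_q)

# Dot-product attention over historical events; a higher weight is read
# as a larger influence on the predicted object entity.
scores = np.array([event_emb[ev] @ query_emb for ev in history])
weights = softmax(scores)

# The top-weighted events then serve as a concise explanation of the prediction.
for ev, w in sorted(zip(history, weights), key=lambda p: -p[1]):
    print(f"{w:.3f}  {ev}")
```

In this toy setup, printing the events in descending order of attention weight is the kind of concise, event-level explanation the abstract refers to; the real model would learn the embeddings and scoring function rather than sampling them at random.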
Cite
Text
Chen et al. "Interpreting Temporal Knowledge Graph Reasoning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30425
Markdown
[Chen et al. "Interpreting Temporal Knowledge Graph Reasoning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/chen2024aaai-interpreting/) doi:10.1609/AAAI.V38I21.30425
BibTeX
@inproceedings{chen2024aaai-interpreting,
title = {{Interpreting Temporal Knowledge Graph Reasoning (Student Abstract)}},
author = {Chen, Bin and Yang, Kai and Tai, Wenxin and Cheng, Zhangtao and Liu, Leyuan and Zhong, Ting and Zhou, Fan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {23451-23453},
doi = {10.1609/AAAI.V38I21.30425},
url = {https://mlanthology.org/aaai/2024/chen2024aaai-interpreting/}
}