K-ON: Stacking Knowledge on the Head Layer of Large Language Model
Abstract
Recent advancements in large language models (LLMs) have significantly improved various natural language processing (NLP) tasks. Typically, LLMs are trained to predict the next token, which aligns well with many NLP tasks. However, in knowledge graph (KG) scenarios, entities are the fundamental units, and identifying an entity requires at least several tokens. This leads to a granularity mismatch between KGs and natural languages. To address this issue, we propose K-ON, which integrates KG knowledge into the LLM by employing multiple head layers for next k-step prediction. K-ON not only generates entity-level results in one step, but also enables contrastive loss against entities, which is the most powerful tool in KG representation learning. Experimental results show that K-ON outperforms state-of-the-art methods that incorporate text and even other modalities.
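The core idea sketched in the abstract, k stacked head layers that score a multi-token entity in a single step, can be illustrated as follows. This is a hypothetical minimal sketch, not the paper's implementation: the class names, dimensions, and the log-probability scoring scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the K-ON idea: instead of a single next-token head,
# stack k head layers that each predict one of the next k tokens, so an
# entity spanning several tokens can be scored in one forward pass.
# All names and shapes here are illustrative assumptions.

class KStepHeads(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int, k: int):
        super().__init__()
        # one projection head per future step
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(k)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim) -- final hidden state from the LLM backbone
        # returns per-step token logits of shape (batch, k, vocab_size)
        return torch.stack([head(h) for head in self.heads], dim=1)


def entity_scores(step_logits: torch.Tensor,
                  entity_token_ids: torch.Tensor) -> torch.Tensor:
    # entity_token_ids: (num_entities, k) token ids spelling out each entity.
    # Sum each entity's token log-probabilities across the k steps, giving one
    # entity-level score per candidate -- usable in a contrastive loss where
    # the true entity is pulled up against negative entities.
    log_probs = step_logits.log_softmax(dim=-1)         # (batch, k, vocab)
    idx = entity_token_ids.t().unsqueeze(0)             # (1, k, num_entities)
    idx = idx.expand(log_probs.size(0), -1, -1)         # (batch, k, num_entities)
    per_step = log_probs.gather(-1, idx)                # (batch, k, num_entities)
    return per_step.sum(dim=1)                          # (batch, num_entities)
```

Under this sketch, a contrastive objective reduces to a cross-entropy over the entity-level scores, e.g. `nn.functional.cross_entropy(entity_scores(logits, candidates), gold_entity_index)`, treating all other candidate entities as negatives.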
Cite
Text
Guo et al. "K-ON: Stacking Knowledge on the Head Layer of Large Language Model." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I11.33278
Markdown
[Guo et al. "K-ON: Stacking Knowledge on the Head Layer of Large Language Model." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/guo2025aaai-k/) doi:10.1609/AAAI.V39I11.33278
BibTeX
@inproceedings{guo2025aaai-k,
title = {{K-ON: Stacking Knowledge on the Head Layer of Large Language Model}},
author = {Guo, Lingbing and Zhang, Yichi and Bo, Zhongpu and Chen, Zhuo and Sun, Mengshu and Zhang, Zhiqiang and Zhang, Wen and Chen, Huajun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {11745--11753},
doi = {10.1609/AAAI.V39I11.33278},
url = {https://mlanthology.org/aaai/2025/guo2025aaai-k/}
}