NAAM: Node-Aware Attention Mechanism for Distilling GNNs-to-MLP (Student Abstract)
Abstract
Recently, researchers have focused on methods that not only distill knowledge from a Graph Neural Network (GNN) into a Multi-Layer Perceptron (MLP) but also leverage multiple teacher GNNs. However, existing methods assign a single attention weight to each teacher GNN. We propose a Node-Aware Attention Mechanism (NAAM) that flexibly adjusts the attention weight for each node to fully leverage multiple GNNs. Experimental results show that NAAM outperforms existing GNN-to-MLP methods. Our source code is available at: https://github.com/NakayamaItsuki/NAAM.
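The abstract does not give the exact formulation, so the following is only a rough illustrative sketch of the core idea: instead of one attention weight per teacher GNN, each node gets its own softmax-normalized weights over the K teachers, and the weighted soft labels serve as the distillation target for the MLP. All function and parameter names here (`node_aware_attention`, `query_weights`, the dot-product scoring) are hypothetical assumptions, not the paper's actual method.

```python
import numpy as np

def node_aware_attention(teacher_logits, node_embeddings, query_weights):
    """Hypothetical sketch of per-node attention over multiple teacher GNNs.

    teacher_logits:  (K, N, C) soft labels from K teachers for N nodes, C classes
    node_embeddings: (N, D) per-node features used to score each teacher
    query_weights:   (K, D) one query vector per teacher (assumed parameterization)
    """
    # score[n, k] = <h_n, q_k>: every node scores every teacher individually
    scores = node_embeddings @ query_weights.T            # (N, K)
    # softmax over teachers, computed separately for each node
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)      # (N, K), rows sum to 1
    # per-node weighted combination of the teachers' soft labels
    targets = np.einsum("nk,knc->nc", alpha, teacher_logits)  # (N, C)
    return targets, alpha

# Toy usage: 2 teachers, 3 nodes, 4 classes, 5-dim node embeddings
rng = np.random.default_rng(0)
targets, alpha = node_aware_attention(
    rng.normal(size=(2, 3, 4)),
    rng.normal(size=(3, 5)),
    rng.normal(size=(2, 5)),
)
print(targets.shape, alpha.shape)  # (3, 4) (3, 2)
```

The `targets` array would then be used as the soft-label objective when training the student MLP; the key difference from a single global weight per teacher is that `alpha` varies across nodes.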
Cite
Text
Nakayama and Onizuka. "NAAM: Node-Aware Attention Mechanism for Distilling GNNs-to-MLP (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35278
Markdown
[Nakayama and Onizuka. "NAAM: Node-Aware Attention Mechanism for Distilling GNNs-to-MLP (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/nakayama2025aaai-naam/) doi:10.1609/AAAI.V39I28.35278
BibTeX
@inproceedings{nakayama2025aaai-naam,
title = {{NAAM: Node-Aware Attention Mechanism for Distilling GNNs-to-MLP (Student Abstract)}},
author = {Nakayama, Itsuki and Onizuka, Makoto},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29433--29435},
doi = {10.1609/AAAI.V39I28.35278},
url = {https://mlanthology.org/aaai/2025/nakayama2025aaai-naam/}
}