Graph Neural Networks Formed via Layer-Wise Ensembles of Heterogeneous Base Models
Abstract
Graph Neural Networks (GNNs), which take numerical node features and graph structure as inputs, have demonstrated superior performance on various semi-supervised learning tasks with graph data. However, the numerical node features utilized by GNNs are commonly extracted from raw data that, in most real-world applications, is of text or tabular (numeric/categorical) type. The best models for such data types in standard supervised learning settings with IID (non-graph) data are not simple neural network layers and thus are not easily incorporated into a GNN. Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data, which are ensembled and stacked in multiple layers. Our layer-wise framework leverages bagging and stacking strategies to achieve strong generalization in a manner that effectively mitigates label leakage and overfitting. Across a variety of graph datasets with tabular/text node features, our method achieves comparable or superior performance relative to both tabular/text and graph neural network models, as well as to existing state-of-the-art hybrid strategies that combine the two.
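As a concrete illustration of the layer-wise bagging-and-stacking recipe the abstract describes, here is a minimal Python sketch. It is not the authors' implementation: the choice of base models (GradientBoostingClassifier, LogisticRegression), the simple neighborhood-averaging propagation rule, and every function name below are illustrative assumptions. Out-of-fold predictions stand in for the bagging strategy that mitigates label leakage, and each layer's propagated predictions are stacked with the original features before the next layer.

```python
# A minimal sketch (not the paper's code) of layer-wise stacking of
# heterogeneous base models with graph-aware propagation. All names,
# model choices, and the propagation rule are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def out_of_fold_probs(model, X, y, train_idx, n_splits=5):
    """Out-of-fold class probabilities on labeled nodes (mitigates label
    leakage); fold models are averaged to score the unlabeled nodes.
    train_idx is an np.ndarray of labeled node indices."""
    n, k = X.shape[0], len(np.unique(y[train_idx]))
    probs = np.zeros((n, k))
    other_idx = np.setdiff1d(np.arange(n), train_idx)
    for fold_tr, fold_va in KFold(n_splits, shuffle=True, random_state=0).split(train_idx):
        m = clone(model).fit(X[train_idx[fold_tr]], y[train_idx[fold_tr]])
        probs[train_idx[fold_va]] = m.predict_proba(X[train_idx[fold_va]])
        probs[other_idx] += m.predict_proba(X[other_idx]) / n_splits
    return probs

def propagate(A_norm, H, steps=2, alpha=0.5):
    """Graph-aware smoothing: mix each node's predictions with its
    neighborhood average (A_norm is a row-normalized adjacency matrix)."""
    for _ in range(steps):
        H = (1 - alpha) * H + alpha * (A_norm @ H)
    return H

def layerwise_stack(X, y, A_norm, train_idx, n_layers=2):
    """Each layer: fit heterogeneous base models on the current features,
    propagate their out-of-fold predictions over the graph, and stack the
    result with the raw features as input to the next layer."""
    base_models = [GradientBoostingClassifier(), LogisticRegression(max_iter=1000)]
    feats = X
    for _ in range(n_layers):
        layer_probs = [out_of_fold_probs(m, feats, y, train_idx) for m in base_models]
        feats = np.hstack([X] + [propagate(A_norm, p) for p in layer_probs])
    return np.mean(layer_probs, axis=0)  # ensemble of the final layer's models
```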
Cite
Text
Chen et al. "Graph Neural Networks Formed via Layer-Wise Ensembles of Heterogeneous Base Models." Transactions on Machine Learning Research, 2024.
Markdown
[Chen et al. "Graph Neural Networks Formed via Layer-Wise Ensembles of Heterogeneous Base Models." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/chen2024tmlr-graph/)
BibTeX
@article{chen2024tmlr-graph,
  title = {{Graph Neural Networks Formed via Layer-Wise Ensembles of Heterogeneous Base Models}},
  author = {Chen, Jiuhai and Mueller, Jonas and Ioannidis, Vassilis N. and Goldstein, Tom and Wipf, David},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/chen2024tmlr-graph/}
}