GMV: A Unified and Efficient Graph Multi-View Learning Framework
Abstract
Graph Neural Networks (GNNs) are pivotal in graph classification but often struggle with generalization and overfitting. We introduce a unified and efficient Graph Multi-View (GMV) learning framework that integrates multi-view learning into GNNs to enhance robustness and efficiency. Leveraging the lottery ticket hypothesis, GMV activates diverse sub-networks within a single GNN through a novel training pipeline comprising mixed-view generation as well as multi-view decomposition and learning. This approach simultaneously broadens "views" from the data, model, and optimization perspectives during training to enhance the generalization capabilities of GNNs. During inference, GMV only adds extra prediction heads to a standard GNN, thereby achieving multi-view learning at minimal cost. Our experiments demonstrate that GMV surpasses other augmentation and ensemble techniques for GNNs and Graph Transformers across various graph classification scenarios.
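The inference-time design described in the abstract, a single shared GNN backbone with several lightweight prediction heads whose outputs are combined, can be sketched as follows. This is a minimal toy illustration with NumPy; the backbone, head shapes, and the averaging rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_embed(X, A, W):
    """One toy message-passing layer: neighbor aggregation, then projection."""
    return np.tanh(A @ X @ W)

class MultiHeadGNN:
    """Sketch of GMV-style inference: one shared backbone, several
    lightweight prediction heads averaged at the output. All names and
    shapes here are hypothetical, chosen only for illustration."""

    def __init__(self, d_in, d_hid, n_classes, n_heads=3):
        self.W = rng.normal(size=(d_in, d_hid))              # shared backbone weights
        self.heads = [rng.normal(size=(d_hid, n_classes))    # extra prediction heads
                      for _ in range(n_heads)]

    def predict(self, X, A):
        H = gnn_embed(X, A, self.W)        # node embeddings from the shared backbone
        g = H.mean(axis=0)                 # mean-pool nodes to a graph embedding
        # each head gives one "view"; average their logits at inference
        logits = np.mean([g @ Wh for Wh in self.heads], axis=0)
        return int(np.argmax(logits))

# toy graph: 4 nodes, 5 features, random edges plus self-loops
X = rng.normal(size=(4, 5))
A = np.eye(4) + (rng.random((4, 4)) > 0.5)
model = MultiHeadGNN(d_in=5, d_hid=8, n_classes=2)
pred = model.predict(X, A)
```

Because the heads share one backbone, the added inference cost is only a few small matrix products, which is the efficiency argument the abstract makes relative to ensembling several full GNNs.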
Cite
Text
Zhu et al. "GMV: A Unified and Efficient Graph Multi-View Learning Framework." Advances in Neural Information Processing Systems, 2025.
Markdown
[Zhu et al. "GMV: A Unified and Efficient Graph Multi-View Learning Framework." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-gmv/)
BibTeX
@inproceedings{zhu2025neurips-gmv,
  title = {{GMV: A Unified and Efficient Graph Multi-View Learning Framework}},
  author = {Zhu, Qipeng and Chen, Jie and Pu, Jian and Zhang, Junping},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/zhu2025neurips-gmv/}
}