Learning on Random Balls Is Sufficient for Estimating (Some) Graph Parameters

Abstract

Theoretical analyses of graph learning methods often assume complete observation of the input graph. In practice, such an assumption is impractical for graphs of arbitrary size due to scalability issues. In this work, we develop a theoretical framework for graph classification in the partial-observation setting (i.e., learning from sampled subgraphs). Equipped with insights from graph limit theory, we propose a new graph classification model that operates on a randomly sampled subgraph, together with a novel topology that characterizes the model's representability. Our framework provides a theoretical validation of mini-batch learning on graphs and yields new learning-theoretic results on generalization bounds and size-generalizability, without assumptions on the input.
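The paper's central object, a random ball (a bounded-radius BFS neighborhood around a uniformly random vertex), is easy to sketch in code. The snippet below is a minimal illustration only, not the paper's actual model: the helper names `random_ball` and `estimate_parameter`, the adjacency-list format, and the toy local statistic are our assumptions, and the plain Monte-Carlo average stands in for the learned classifier the paper analyzes.

```python
import random
from collections import deque

def random_ball(adj, radius, rng=random):
    """Return the vertex set of an r-hop ball: a BFS neighborhood of depth
    `radius` around a uniformly random root of the graph `adj`
    (an adjacency list {node: [neighbors, ...]})."""
    root = rng.choice(list(adj))
    ball = {root}
    frontier = deque([(root, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == radius:
            continue  # do not expand past the ball's radius
        for u in adj[v]:
            if u not in ball:
                ball.add(u)
                frontier.append((u, depth + 1))
    return ball

def estimate_parameter(adj, local_stat, radius=2, n_samples=200, rng=random):
    """Monte-Carlo estimate: average a local statistic over independently
    sampled random balls instead of reading the whole graph at once."""
    samples = (local_stat(adj, random_ball(adj, radius, rng)) for _ in range(n_samples))
    return sum(samples) / n_samples

# Toy usage: estimate the average degree seen inside 2-hop balls.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
mean_degree = lambda g, ball: sum(len(g[v]) for v in ball) / len(ball)
print(estimate_parameter(adj, mean_degree, radius=2, n_samples=50))
```

In this sketch each ball is sampled independently, which mirrors the mini-batch view the abstract mentions: the estimator never needs the whole graph in memory, only one bounded neighborhood at a time.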

Cite

Text

Maehara and NT. "Learning on Random Balls Is Sufficient for Estimating (Some) Graph Parameters." Neural Information Processing Systems, 2021.

Markdown

[Maehara and NT. "Learning on Random Balls Is Sufficient for Estimating (Some) Graph Parameters." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/maehara2021neurips-learning/)

BibTeX

@inproceedings{maehara2021neurips-learning,
  title     = {{Learning on Random Balls Is Sufficient for Estimating (Some) Graph Parameters}},
  author    = {Maehara, Takanori and NT, Hoang},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/maehara2021neurips-learning/}
}