Training One-Dimensional Graph Neural Networks Is NP-Hard

Abstract

We initiate the study of the computational complexity of training graph neural networks (GNNs). We consider the classical node classification setting; there, the intractability of training multidimensional GNNs follows immediately from known lower bounds for training classical neural networks (and holds even for trivial GNNs). However, one-dimensional GNNs form a crucial case of interest: the computational complexity of training such networks depends on both the graphical structure of the network and the properties of the involved activation and aggregation functions. As our main result, we establish the NP-hardness of training ReLU-activated one-dimensional GNNs via a highly non-trivial reduction. We complement this result with algorithmic upper bounds for the training problem in the ReLU-activated and linearly-activated settings.
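
To make the setting concrete, the following is a minimal sketch of a single message-passing layer of a one-dimensional, ReLU-activated GNN with sum aggregation: every node feature and every trainable parameter is a single real number. The update rule and the parameter names (w_self, w_neigh, bias) are illustrative assumptions, not necessarily the exact formulation used in the paper.

def relu(x: float) -> float:
    return max(x, 0.0)

def gnn_layer_1d(features, neighbours, w_self, w_neigh, bias):
    # One sum-aggregation message-passing step in which each node carries a
    # single scalar feature and the layer has just three scalar parameters.
    new_features = []
    for v, x_v in enumerate(features):
        aggregated = sum(features[u] for u in neighbours[v])
        new_features.append(relu(w_self * x_v + w_neigh * aggregated + bias))
    return new_features

# Toy example: a path graph on three nodes with scalar features.
features = [1.0, -2.0, 0.5]
neighbours = {0: [1], 1: [0, 2], 2: [1]}
print(gnn_layer_1d(features, neighbours, w_self=0.7, w_neigh=0.3, bias=-0.1))

Even in this one-dimensional regime, where training means fitting only a handful of real parameters, the paper shows that the training problem is NP-hard.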

Cite

Text

Ganian et al. "Training One-Dimensional Graph Neural Networks Is NP-Hard." International Conference on Learning Representations, 2025.

Markdown

[Ganian et al. "Training One-Dimensional Graph Neural Networks Is NP-Hard." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/ganian2025iclr-training/)

BibTeX

@inproceedings{ganian2025iclr-training,
  title     = {{Training One-Dimensional Graph Neural Networks Is NP-Hard}},
  author    = {Ganian, Robert and Rocton, Mathis and Wietheger, Simon},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/ganian2025iclr-training/}
}