GNN-FiLM: Graph Neural Networks with Feature-Wise Linear Modulation

Abstract

This paper presents a new Graph Neural Network (GNN) type using feature-wise linear modulation (FiLM). Many standard GNN variants propagate information along the edges of a graph by computing messages based only on the representation of the source of each edge. In GNN-FiLM, the representation of the target node of an edge is used to compute a transformation that can be applied to all incoming messages, allowing feature-wise modulation of the passed information. Different GNN architectures are compared in extensive experiments on three tasks from the literature, using re-implementations of many baseline methods. Hyperparameters for all methods were found using extensive search, yielding somewhat surprising results: differences between state-of-the-art models are much smaller than reported in the literature, and well-known simple baselines that are often omitted from comparisons perform better than recently proposed GNN variants. Nonetheless, GNN-FiLM outperforms these methods on a regression task on molecular graphs and performs competitively on other tasks.
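The abstract describes the core mechanism: the target node of each edge computes a feature-wise scale and shift (the FiLM parameters) that modulate the message derived from the source node. Below is a minimal NumPy sketch of one such propagation step, assuming a single edge type, sum aggregation, a linear hypernetwork for the FiLM parameters, and a ReLU nonlinearity; all function and variable names here are illustrative, not taken from the paper's code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_film_layer(h, edges, W, W_gamma, W_beta):
    """One sketch of a GNN-FiLM propagation step (single edge type).

    h               : (num_nodes, d) node representations
    edges           : iterable of (source, target) index pairs
    W               : (d, d) message transformation applied to the source node
    W_gamma, W_beta : (d, d) hypernetwork weights computing the feature-wise
                      scale/shift from the *target* node's representation
    """
    out = np.zeros_like(h)
    for s, t in edges:
        gamma = h[t] @ W_gamma            # feature-wise scale from the target
        beta = h[t] @ W_beta              # feature-wise shift from the target
        out[t] += gamma * (h[s] @ W) + beta  # modulated message, sum-aggregated
    return relu(out)
```

For example, with identity weights and a zero shift, the message along an edge is simply the source representation scaled element-wise by the target representation, which is the feature-wise modulation the abstract refers to.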

Cite

Text

Brockschmidt. "GNN-FiLM: Graph Neural Networks with Feature-Wise Linear Modulation." International Conference on Machine Learning, 2020.

Markdown

[Brockschmidt. "GNN-FiLM: Graph Neural Networks with Feature-Wise Linear Modulation." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/brockschmidt2020icml-gnnfilm/)

BibTeX

@inproceedings{brockschmidt2020icml-gnnfilm,
  title     = {{GNN-FiLM: Graph Neural Networks with Feature-Wise Linear Modulation}},
  author    = {Brockschmidt, Marc},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {1144--1152},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/brockschmidt2020icml-gnnfilm/}
}