HEAT: Hyperedge Attention Networks

Abstract

Learning from structured data is a core machine learning task. Commonly, such data is represented as graphs, which typically consider only (typed) binary relationships between pairs of nodes. This is a substantial limitation for many domains with highly structured data. One important such domain is source code, where hypergraph-based representations can better capture the semantically rich and structured nature of code. In this work, we present HEAT, a neural model capable of representing typed and qualified hypergraphs, where each hyperedge explicitly qualifies how participating nodes contribute. It can be viewed as a generalization of both message passing neural networks and Transformers. We evaluate HEAT on knowledge base completion and on bug detection and repair, using a novel hypergraph representation of programs. In both settings, it outperforms strong baselines, indicating its power and generality.
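For intuition, the following is a minimal sketch (in PyTorch, our choice; the paper does not prescribe any code) of what attention within a typed, qualified hyperedge can look like. It is an illustration under stated assumptions, not the authors' implementation: all class, parameter, and variable names here are hypothetical. The idea sketched is that each participating node's state is offset by an embedding of its qualifier (its role in the hyperedge), a token for the hyperedge's type is prepended, and a standard Transformer layer lets the participants exchange information; a node's update averages the messages it receives across all hyperedges it belongs to.

# Illustrative sketch only; names and shapes are assumptions, not the paper's API.
import torch
import torch.nn as nn

class HyperedgeAttentionSketch(nn.Module):
    """One round of attention within typed, qualified hyperedges.

    A hyperedge is a pair (edge_type, [(node_idx, qualifier_idx), ...]),
    where the qualifier states how that node participates in the hyperedge.
    """

    def __init__(self, dim: int, num_qualifiers: int, num_edge_types: int, heads: int = 4):
        super().__init__()
        self.qualifier_emb = nn.Embedding(num_qualifiers, dim)  # role of a node in the hyperedge
        self.edge_type_emb = nn.Embedding(num_edge_types, dim)  # type of the hyperedge itself
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, node_states: torch.Tensor, hyperedges) -> torch.Tensor:
        # node_states: (num_nodes, dim)
        device = node_states.device
        messages = torch.zeros_like(node_states)
        counts = torch.zeros(node_states.size(0), 1, device=device)
        for edge_type, members in hyperedges:
            idx = torch.tensor([n for n, _ in members], dtype=torch.long, device=device)
            quals = torch.tensor([q for _, q in members], dtype=torch.long, device=device)
            # One token per participating node, offset by its qualifier embedding,
            # plus one token carrying the hyperedge type.
            tokens = node_states[idx] + self.qualifier_emb(quals)
            type_tok = self.edge_type_emb(torch.tensor([edge_type], device=device))
            tokens = torch.cat([type_tok, tokens], dim=0)
            out = self.attn(tokens.unsqueeze(0)).squeeze(0)[1:]  # drop the edge-type token
            messages.index_add_(0, idx, out)
            counts.index_add_(0, idx, torch.ones(len(members), 1, device=device))
        # Residual update; nodes in no hyperedge keep their state unchanged.
        return node_states + messages / counts.clamp(min=1)

# Example usage (hypothetical sizes): 10 nodes, two hyperedges.
layer = HyperedgeAttentionSketch(dim=64, num_qualifiers=8, num_edge_types=4)
h = torch.randn(10, 64)
edges = [(0, [(0, 1), (3, 2), (7, 0)]), (2, [(1, 3), (2, 3)])]
h = layer(h, edges)

Note how this framing is consistent with the abstract's claim that HEAT generalizes both families of models: restricting every hyperedge to two nodes yields a message passing scheme, while a single hyperedge over all nodes reduces to a Transformer over the full node set.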

Cite

Text

Georgiev et al. "HEAT: Hyperedge Attention Networks." Transactions on Machine Learning Research, 2022.

Markdown

[Georgiev et al. "HEAT: Hyperedge Attention Networks." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/georgiev2022tmlr-heat/)

BibTeX

@article{georgiev2022tmlr-heat,
  title     = {{HEAT: Hyperedge Attention Networks}},
  author    = {Georgiev, Dobrik Georgiev and Brockschmidt, Marc and Allamanis, Miltiadis},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/georgiev2022tmlr-heat/}
}