The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited

Abstract

Hypergraphs allow one to encode higher-order relationships in data and are thus a very flexible modeling tool. Current learning methods are either based on approximations of the hypergraphs via graphs or on tensor methods that are only applicable under special conditions. In this paper we present a new learning framework on hypergraphs which fully uses the hypergraph structure. The key element is a family of regularization functionals based on the total variation on hypergraphs.
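The regularizer named in the abstract, the total variation on hypergraphs, penalizes each hyperedge by the weighted spread of a vertex function over that edge. A minimal sketch under that assumed definition, TV(f) = Σ_e w_e (max_{u∈e} f_u − min_{v∈e} f_v); the function name and example data are illustrative, not from the paper:

```python
def hypergraph_tv(f, hyperedges, weights):
    """Total variation of a vertex function on a hypergraph.

    f          -- sequence mapping vertex index -> real value
    hyperedges -- list of sets of vertex indices
    weights    -- per-hyperedge nonnegative weights

    Each hyperedge contributes its weight times the spread
    (max minus min) of f over the vertices it contains.
    """
    return sum(w * (max(f[v] for v in e) - min(f[v] for v in e))
               for e, w in zip(hyperedges, weights))

# Example: four vertices, two overlapping hyperedges.
f = [0.0, 1.0, 1.0, 3.0]
edges = [{0, 1, 2}, {1, 2, 3}]
w = [1.0, 0.5]
print(hypergraph_tv(f, edges, w))  # 1*(1-0) + 0.5*(3-1) = 2.0
```

Note that a hyperedge whose vertices all share the same value of f contributes nothing, which is why minimizing this functional encourages labelings that are constant within hyperedges.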

Cite

Text

Hein et al. "The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited." Neural Information Processing Systems, 2013.

Markdown

[Hein et al. "The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/hein2013neurips-total/)

BibTeX

@inproceedings{hein2013neurips-total,
  title     = {{The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited}},
  author    = {Hein, Matthias and Setzer, Simon and Jost, Leonardo and Rangapuram, Syama Sundar},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {2427--2435},
  url       = {https://mlanthology.org/neurips/2013/hein2013neurips-total/}
}