Re-Revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method
Abstract
We revisit semi-supervised learning on hypergraphs. As in previous approaches, our method uses a convex program whose objective function is not everywhere differentiable. We exploit the non-uniqueness of the optimal solutions and consider confidence intervals, which give the exact ranges that unlabeled vertices can take in any optimal solution. Moreover, we give a much simpler approach for solving the convex program, based on the subgradient method. Our experiments on real-world datasets confirm that our confidence interval approach on hypergraphs outperforms existing methods, and our subgradient method gives faster running times when the number of vertices is much larger than the number of edges.
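To make the subgradient idea concrete, below is a minimal sketch (not the authors' implementation) of projected subgradient descent on the standard non-smooth hypergraph objective, sum over hyperedges e of w_e · max_{u,v ∈ e} (f_u − f_v)², with labeled vertices clamped to their labels. The function name, step-size schedule, and data layout are illustrative assumptions.

```python
import numpy as np

def subgradient_hypergraph_ssl(hyperedges, weights, labels, n_vertices,
                               n_iters=500, step0=1.0):
    """Sketch: projected subgradient descent on
    sum_e w_e * max_{u,v in e} (f_u - f_v)^2 with labeled vertices fixed.
    Assumed interface: hyperedges is a list of integer index arrays,
    weights a list of positive floats, labels a dict {vertex: value}."""
    f = np.zeros(n_vertices)
    labeled = np.array(sorted(labels.keys()))
    for v, y in labels.items():
        f[v] = y                           # labeled vertices are clamped
    best_f, best_val = f.copy(), np.inf
    for t in range(1, n_iters + 1):
        g = np.zeros(n_vertices)
        val = 0.0
        for e, w in zip(hyperedges, weights):
            fe = f[e]
            u, v = e[np.argmax(fe)], e[np.argmin(fe)]  # pair attaining the max
            diff = f[u] - f[v]
            val += w * diff * diff
            g[u] += 2.0 * w * diff         # one valid subgradient of the max term
            g[v] -= 2.0 * w * diff
        g[labeled] = 0.0                   # projection: keep labeled vertices fixed
        f -= (step0 / np.sqrt(t)) * g      # diminishing step size
        if val < best_val:                 # subgradient steps are not monotone,
            best_val, best_f = val, f.copy()  # so track the best iterate seen
    return best_f

# Toy usage: 5 vertices, two hyperedges, one positive and one negative label.
edges = [np.array([0, 1, 2]), np.array([2, 3, 4])]
f = subgradient_hypergraph_ssl(edges, [1.0, 1.0], {0: 1.0, 4: -1.0}, 5)
```

Because the objective is non-differentiable, any subgradient of each max term is valid; the sketch uses the gradient of the term attained by the maximizing pair, which is the simplest choice.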
Cite

Text

Zhang et al. "Re-Revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method." International Conference on Machine Learning, 2017.

Markdown

[Zhang et al. "Re-Revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/zhang2017icml-rerevisiting/)

BibTeX
@inproceedings{zhang2017icml-rerevisiting,
title = {{Re-Revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method}},
author = {Zhang, Chenzi and Hu, Shuguang and Tang, Zhihao Gavin and Chan, T-H. Hubert},
booktitle = {International Conference on Machine Learning},
year = {2017},
pages = {4026--4034},
volume = {70},
url = {https://mlanthology.org/icml/2017/zhang2017icml-rerevisiting/}
}