On the Consistency of Graph-Based Bayesian Semi-Supervised Learning and the Scalability of Sampling Algorithms

Abstract

This paper considers a Bayesian approach to graph-based semi-supervised learning. We show that if the graph parameters are suitably scaled, the graph posteriors converge to a continuum limit as the size of the unlabeled data set grows. This consistency result has profound algorithmic implications: we prove that when consistency holds, carefully designed Markov chain Monte Carlo algorithms have a uniform spectral gap, independent of the number of unlabeled inputs. Numerical experiments illustrate and complement the theory.
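
The samplers the abstract alludes to are of preconditioned Crank-Nicolson (pCN) type. As a rough illustration only, the following Python/NumPy sketch implements a graph pCN sampler under simplifying assumptions: a Gaussian-kernel graph, a prior N(0, (tau^2 I + L)^(-alpha)) built from the graph Laplacian L, and Gaussian label noise on the labeled subset. The helper names (graph_laplacian, pcn_sample) and the parameters (eps, beta, tau, alpha, gamma) are illustrative choices, not the paper's code.

import numpy as np

def graph_laplacian(X, eps):
    # Unnormalized graph Laplacian L = D - W with a Gaussian proximity kernel
    # (an illustrative construction; the paper studies suitably scaled graphs).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * eps ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def pcn_sample(L, labels, labeled_idx, n_steps=5000, beta=0.2,
               tau=1.0, alpha=2.0, gamma=0.1, seed=0):
    # pCN sampling of a graph posterior with Gaussian prior N(0, (tau^2 I + L)^(-alpha))
    # and Gaussian label noise of standard deviation gamma on the labeled points.
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    lam, V = np.linalg.eigh(L)                          # spectral decomposition of L
    prior_sqrt = V * (tau ** 2 + lam) ** (-alpha / 2)   # square root of the prior covariance

    def log_lik(u):
        r = u[labeled_idx] - labels
        return -0.5 * np.sum(r ** 2) / gamma ** 2

    u = prior_sqrt @ rng.standard_normal(n)             # start from a prior draw
    samples = []
    for _ in range(n_steps):
        xi = prior_sqrt @ rng.standard_normal(n)        # fresh prior draw
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi    # pCN proposal, prior-reversible
        # The acceptance ratio involves only the likelihood, not the prior.
        if np.log(rng.uniform()) < log_lik(v) - log_lik(u):
            u = v
        samples.append(u.copy())
    return np.array(samples)

Because the proposal is reversible with respect to the prior, the accept-reject step depends only on the data misfit at the labeled points; this is the structural feature behind a spectral gap that does not degrade as the number of unlabeled inputs grows.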

Cite

Text

García Trillos et al. "On the Consistency of Graph-Based Bayesian Semi-Supervised Learning and the Scalability of Sampling Algorithms." Journal of Machine Learning Research, 2020.

Markdown

[García Trillos et al. "On the Consistency of Graph-Based Bayesian Semi-Supervised Learning and the Scalability of Sampling Algorithms." Journal of Machine Learning Research, 2020.](https://mlanthology.org/jmlr/2020/trillos2020jmlr-consistency/)

BibTeX

@article{trillos2020jmlr-consistency,
  title     = {{On the Consistency of Graph-Based Bayesian Semi-Supervised Learning and the Scalability of Sampling Algorithms}},
  author    = {Garc{\'i}a Trillos, Nicol{\'a}s and Kaplan, Zachary and Samakhoana, Thabo and Sanz-Alonso, Daniel},
  journal   = {Journal of Machine Learning Research},
  year      = {2020},
  pages     = {1--47},
  volume    = {21},
  url       = {https://mlanthology.org/jmlr/2020/trillos2020jmlr-consistency/}
}