Gaussian Process Models for Link Analysis and Transfer Learning

Abstract

In this paper we develop a Gaussian process (GP) framework to model a collection of reciprocal random variables defined on the *edges* of a network. We show how to construct GP priors, i.e., covariance functions, on the edges of directed, undirected, and bipartite graphs. The model suggests an intimate connection between *link prediction* and *transfer learning*, which have traditionally been treated as two separate research topics. Although straightforward GP inference has a very high computational complexity, we develop an efficient learning algorithm that can handle a large number of observations. Experimental results on several real-world data sets verify the model's superior learning capacity.
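The abstract does not spell out the covariance construction, so the sketch below is only a rough illustration of the general idea for the bipartite case: it assumes the edge covariance factorizes as a product of two node-level kernels (a Kronecker-product construction) and then runs standard GP regression on a handful of observed edge values. The kernels, features, and observations are all made up for illustration and are not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel between row-wise feature matrices."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def edge_covariance(K_rows, K_cols):
    """Covariance over all edges (i, j) of a bipartite graph, assuming it
    factorizes as a product of the two node covariances (Kronecker product).
    Edge (i, j) maps to flat index i * n + j."""
    return np.kron(K_rows, K_cols)

# Toy bipartite graph: m nodes on one side, n on the other, with random features.
rng = np.random.default_rng(0)
m, n = 8, 6
U = rng.normal(size=(m, 3))   # features of the first node set (hypothetical)
V = rng.normal(size=(n, 3))   # features of the second node set (hypothetical)

K_edges = edge_covariance(rbf_kernel(U, U), rbf_kernel(V, V))  # (m*n) x (m*n)

# GP regression on a few observed edges, predicting the remaining links.
obs_idx = rng.choice(m * n, size=20, replace=False)
y_obs = rng.normal(size=obs_idx.size)          # placeholder edge observations
noise = 1e-2

K_oo = K_edges[np.ix_(obs_idx, obs_idx)] + noise * np.eye(obs_idx.size)
K_ao = K_edges[:, obs_idx]
mean_all_edges = K_ao @ np.linalg.solve(K_oo, y_obs)  # posterior mean for every edge
print(mean_all_edges.reshape(m, n))
```

Note that this naive version forms the full (m*n) x (m*n) edge covariance, which is exactly the cubic-cost bottleneck the paper's efficient learning algorithm is designed to avoid; the sketch is meant only to make the edge-level GP setup concrete.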

Cite

Text

Yu and Chu. "Gaussian Process Models for Link Analysis and Transfer Learning." Neural Information Processing Systems, 2007.

Markdown

[Yu and Chu. "Gaussian Process Models for Link Analysis and Transfer Learning." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/yu2007neurips-gaussian/)

BibTeX

@inproceedings{yu2007neurips-gaussian,
  title     = {{Gaussian Process Models for Link Analysis and Transfer Learning}},
  author    = {Yu, Kai and Chu, Wei},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {1657--1664},
  url       = {https://mlanthology.org/neurips/2007/yu2007neurips-gaussian/}
}