Out-of-Sample Extension of Graph Adjacency Spectral Embedding

Abstract

Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample. In this work, we consider the problem of obtaining an out-of-sample extension for the adjacency spectral embedding, a procedure for embedding the vertices of a graph into Euclidean space. We present two different approaches to this problem, one based on a least-squares objective and the other based on a maximum-likelihood formulation. We show that if the graph of interest is drawn according to a certain latent position model called a random dot product graph, then both of these out-of-sample extensions estimate the true latent position of the out-of-sample vertex with the same error rate. Further, we prove a central limit theorem for the least-squares-based extension, showing that the estimate is asymptotically normal about the truth in the large-graph limit.
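For intuition, here is a minimal numerical sketch of the least-squares extension described above; it is not code from the paper. It simulates a random dot product graph, computes the adjacency spectral embedding (scaled top eigenvectors) of the in-sample vertices, and embeds a held-out vertex by minimizing a least-squares objective against the in-sample embedding. The graph size, latent dimension, and latent-position distribution are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random dot product graph (RDPG): vertex i has a latent
# position X_i in R^d, and edge {i, j} appears independently with
# probability <X_i, X_j>.  The sizes here are illustrative choices.
n, d = 1000, 2
X = rng.uniform(0.2, 0.6, size=(n, d))        # keeps all <X_i, X_j> in (0, 1)
P = X @ X.T
upper = np.triu(rng.random((n, n)) < P, k=1)  # sample each edge once
A = (upper | upper.T).astype(float)           # symmetric, hollow adjacency

# Hold out the last vertex; its adjacency vector to the in-sample
# vertices is all the out-of-sample step is allowed to see.
A_in, a_oos = A[:-1, :-1], A[:-1, -1]

# Adjacency spectral embedding (ASE) of the in-sample graph: eigenvectors
# for the d largest eigenvalues of A_in, scaled by the square roots of
# those eigenvalues.
evals, evecs = np.linalg.eigh(A_in)           # eigenvalues in ascending order
X_hat = evecs[:, -d:] * np.sqrt(evals[-d:])

# Least-squares out-of-sample extension: place the new vertex at the
# minimizer of || a_oos - X_hat @ w ||_2 over w in R^d.
w_hat, *_ = np.linalg.lstsq(X_hat, a_oos, rcond=None)

# ASE recovers latent positions only up to an orthogonal transform, so
# align to the truth with a Procrustes rotation before measuring error.
U, _, Vt = np.linalg.svd(X_hat.T @ X[:-1])
W = U @ Vt
print("OOS estimation error after alignment:", np.linalg.norm(w_hat @ W - X[-1]))
```

The maximum-likelihood variant mentioned in the abstract would replace the least-squares objective with the Bernoulli log-likelihood of the observed adjacency vector given a candidate latent position.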

Cite

Text

Levin et al. "Out-of-Sample Extension of Graph Adjacency Spectral Embedding." International Conference on Machine Learning, 2018.

Markdown

[Levin et al. "Out-of-Sample Extension of Graph Adjacency Spectral Embedding." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/levin2018icml-outofsample/)

BibTeX

@inproceedings{levin2018icml-outofsample,
  title     = {{Out-of-Sample Extension of Graph Adjacency Spectral Embedding}},
  author    = {Levin, Keith and Roosta, Fred and Mahoney, Michael and Priebe, Carey},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {2975--2984},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/levin2018icml-outofsample/}
}