Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data
Abstract
In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior’s covariance function constrains the mappings to be linear the model is equivalent to PCA; we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally our non-linear algorithm can be further kernelised, leading to ‘twin kernel PCA’ in which a mapping between feature spaces occurs.
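The model the abstract describes can be sketched numerically: place a Gaussian process prior with an RBF covariance over the mapping from latent to data space, and optimise the latent points by maximising the resulting marginal likelihood. The sketch below is a minimal illustration under assumed hyperparameters (`alpha`, `gamma`, `beta` are placeholders, and L-BFGS stands in for the paper's scaled conjugate gradient optimiser); it is not the paper's implementation.

```python
# Minimal GPLVM sketch: optimise latent points X to maximise the GP marginal
# likelihood of the data Y. Hyperparameters are fixed for brevity (an
# assumption; the paper optimises them jointly with X).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(x_flat, Y, q, alpha=1.0, gamma=1.0, beta=100.0):
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    # RBF covariance over latent points, plus an independent noise term.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = alpha * np.exp(-0.5 * gamma * sq) + np.eye(N) / beta
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # Negative log-likelihood up to an additive constant.
    return 0.5 * D * logdet + 0.5 * np.sum(Y * Kinv_Y)

def fit_gplvm(Y, q=2):
    N = Y.shape[0]
    # Initialise the latent points with PCA, as the paper does.
    Yc = Y - Y.mean(axis=0)
    U, s, _ = np.linalg.svd(Yc, full_matrices=False)
    X0 = U[:, :q] * s[:q]
    res = minimize(neg_log_likelihood, X0.ravel(), args=(Y, q),
                   method="L-BFGS-B")
    return res.x.reshape(N, q)

# Toy usage: embed 20 points of 5-dimensional data into a 2-D latent space.
Y = np.random.default_rng(1).standard_normal((20, 5))
X = fit_gplvm(Y)
```

With the linear covariance `K = X Xᵀ + β⁻¹I` in place of the RBF term, the maximum of this likelihood recovers PCA, which is the equivalence the abstract states.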
Cite
Lawrence. "Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data." Neural Information Processing Systems, 2003.
@inproceedings{lawrence2003neurips-gaussian,
title = {{Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data}},
author = {Lawrence, Neil D.},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {329--336},
url = {https://mlanthology.org/neurips/2003/lawrence2003neurips-gaussian/}
}