Graph-Laplacian PCA: Closed-Form Solution and Robustness
Abstract
Principal Component Analysis (PCA) is widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. We propose a graph-Laplacian PCA (gLPCA) to learn a low-dimensional representation of X that incorporates the graph structure encoded in W. This model has several advantages: (1) It is a data representation model. (2) It has a compact closed-form solution and can be computed efficiently. (3) It is capable of removing corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvement on clustering and classification.
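The closed-form solution highlighted in the abstract can be sketched as follows. This is a minimal illustration, assuming the gLPCA objective min over U, Q of ||X - UQᵀ||²_F + α tr(Qᵀ L Q) subject to QᵀQ = I, whose minimizer is given by the k smallest eigenvectors of αL - XᵀX; the function name `glpca` and all variable names are illustrative, not from the paper.

```python
import numpy as np

def glpca(X, W, k, alpha):
    """Sketch of graph-Laplacian PCA.

    X : (p, n) data matrix, columns are samples.
    W : (n, n) symmetric graph affinity matrix.
    k : target dimension.
    alpha : graph-regularization weight (alpha = 0 recovers plain PCA).
    """
    # Combinatorial graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Assumed closed-form: Q = k smallest eigenvectors of alpha*L - X^T X.
    G = alpha * L - X.T @ X
    _, vecs = np.linalg.eigh(G)   # eigh returns eigenvalues in ascending order
    Q = vecs[:, :k]               # (n, k), orthonormal columns
    U = X @ Q                     # (p, k) basis; reconstruction is U @ Q.T
    return U, Q
```

With alpha = 0 the matrix G reduces to -XᵀX, so Q spans the top-k right singular subspace of X and U Qᵀ coincides with the rank-k truncated SVD, which is the sense in which the model generalizes PCA.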
Cite
Text
Jiang et al. "Graph-Laplacian PCA: Closed-Form Solution and Robustness." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.448

Markdown
[Jiang et al. "Graph-Laplacian PCA: Closed-Form Solution and Robustness." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/jiang2013cvpr-graphlaplacian/) doi:10.1109/CVPR.2013.448

BibTeX
@inproceedings{jiang2013cvpr-graphlaplacian,
title = {{Graph-Laplacian PCA: Closed-Form Solution and Robustness}},
author = {Jiang, Bo and Ding, Chris and Luo, Bin and Tang, Jin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2013},
doi = {10.1109/CVPR.2013.448},
url = {https://mlanthology.org/cvpr/2013/jiang2013cvpr-graphlaplacian/}
}