Concurrent Subspaces Analysis
Abstract
A representative subspace is important for image analysis, yet the corresponding techniques often suffer from the curse of dimensionality. In this paper, we propose a new algorithm, called Concurrent Subspaces Analysis (CSA), to derive representative subspaces by encoding image objects as $2^{nd}$ or even higher order tensors. In CSA, an original higher dimensional tensor is transformed into a lower dimensional one using multiple concurrent subspaces that characterize the most representative information of the different dimensions, respectively. Moreover, an efficient procedure is provided to learn these subspaces in an iterative manner. As analyzed in this paper, each sub-step of CSA takes as the new objects to be analyzed the column vectors of the matrices acquired from the k-mode unfolding of the tensors, so the curse of dimensionality can be effectively avoided. Extensive experiments on $3^{rd}$ order tensor data, simulated video sequences, and a Gabor filtered digital number image database show that CSA outperforms Principal Component Analysis in terms of both reconstruction and classification capability.
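The iterative procedure summarized in the abstract can be sketched as follows: for each mode k, project the tensors along all other modes using the current subspaces, take the column vectors of each k-mode unfolding as samples, and update the k-th subspace from the leading eigenvectors of their scatter matrix. This is a minimal NumPy sketch under those assumptions; the function names, rank parameters, and initialization are illustrative, not the authors' reference implementation.

```python
import numpy as np

def unfold(T, mode):
    # k-mode unfolding: the mode-k fibers of T become the columns of a matrix
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def csa(tensors, ranks, n_iter=5):
    """Sketch of Concurrent Subspaces Analysis: learn one projection
    matrix U_k per tensor mode by iterating over the modes (assumed
    form of the procedure; hyperparameters are hypothetical)."""
    shape = tensors[0].shape
    # Initialize each subspace with a truncated identity
    Us = [np.eye(d, r) for d, r in zip(shape, ranks)]
    for _ in range(n_iter):
        for k in range(len(shape)):
            # Accumulate the scatter matrix of the k-mode unfolding's
            # column vectors, after projecting every mode except k
            C = np.zeros((shape[k], shape[k]))
            for T in tensors:
                P = T
                for m, U in enumerate(Us):
                    if m != k:
                        # contract mode m with U, then restore axis order
                        P = np.moveaxis(np.tensordot(P, U, axes=(m, 0)), -1, m)
                M = unfold(P, k)
                C += M @ M.T
            # Update U_k with the leading eigenvectors of the scatter matrix
            w, V = np.linalg.eigh(C)          # ascending eigenvalues
            Us[k] = V[:, ::-1][:, :ranks[k]]  # top-rank_k eigenvectors
    return Us
```

Because each eigen-decomposition is only of size d_k x d_k (one tensor dimension, not the full vectorized image), the per-mode sub-problems stay small, which is how the approach sidesteps the dimensionality issue the abstract describes.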
Cite

Text

Xu et al. "Concurrent Subspaces Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005. doi:10.1109/CVPR.2005.107

Markdown

[Xu et al. "Concurrent Subspaces Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005.](https://mlanthology.org/cvpr/2005/xu2005cvpr-concurrent/) doi:10.1109/CVPR.2005.107

BibTeX
@inproceedings{xu2005cvpr-concurrent,
title = {{Concurrent Subspaces Analysis}},
author = {Xu, Dong and Yan, Shuicheng and Zhang, Lei and Zhang, HongJiang and Liu, Zhengkai and Shum, Heung-Yeung},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2005},
pages = {203-208},
doi = {10.1109/CVPR.2005.107},
url = {https://mlanthology.org/cvpr/2005/xu2005cvpr-concurrent/}
}