Dimensionality Reduction with Subspace Structure Preservation
Abstract
Modeling data as being sampled from a union of independent subspaces has been widely applied to a number of real-world applications. However, dimensionality reduction approaches that theoretically preserve this independence assumption have not been well studied. In this paper, we propose a novel dimensionality reduction algorithm that theoretically preserves this structure for a given dataset. Our key contribution is to show that $2K$ projection vectors are sufficient for independence preservation of any $K$-class data sampled from a union of independent subspaces, and it is this non-trivial observation that underlies the design of our technique. We support our theoretical analysis with empirical results on both synthetic and real-world data, achieving state-of-the-art results compared to popular dimensionality reduction techniques.
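The abstract's claim concerns projecting data drawn from a union of independent subspaces onto only $2K$ carefully chosen directions. The following is a minimal sketch, not the paper's construction: it builds synthetic data from $K$ independent subspaces, checks independence with the standard rank condition (dimension of the sum equals the sum of the dimensions), and shows that a generic random projection onto $2K$ directions does not preserve this structure by itself, which is why a specifically constructed set of $2K$ projection vectors is non-trivial. All dimensions and parameter choices below are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): synthetic union-of-independent-
# subspaces data, a rank-based independence test, and a demonstration that a
# generic random 2K-dimensional projection loses the structure.
import numpy as np

rng = np.random.default_rng(0)

D, K, d, n = 50, 3, 4, 40      # ambient dim, #classes, subspace dim, samples/class

# Independent subspaces: disjoint column blocks of one orthonormal matrix.
Q, _ = np.linalg.qr(rng.standard_normal((D, K * d)))
bases = [Q[:, k * d:(k + 1) * d] for k in range(K)]
X = [B @ rng.standard_normal((d, n)) for B in bases]   # per-class data matrices

def is_independent(blocks, tol=1e-8):
    """Subspaces spanned by the blocks are independent iff the rank of the
    concatenated data equals the sum of the individual ranks."""
    ranks = [np.linalg.matrix_rank(Xk, tol=tol) for Xk in blocks]
    joint = np.linalg.matrix_rank(np.hstack(blocks), tol=tol)
    return joint == sum(ranks), ranks, joint

ok, ranks, joint = is_independent(X)
print(f"original data: ranks={ranks}, joint rank={joint}, independent={ok}")

# Project onto 2K random directions: each class keeps rank d = 4, but the
# ambient dimension drops to 2K = 6 < K*d = 12, so the direct-sum structure
# is generically destroyed by an arbitrary choice of projection vectors.
P = np.linalg.qr(rng.standard_normal((D, 2 * K)))[0]   # D x 2K, orthonormal
Y = [P.T @ Xk for Xk in X]
ok, ranks, joint = is_independent(Y)
print(f"random 2K projection: ranks={ranks}, joint rank={joint}, independent={ok}")
```

The contrast between the two printed lines illustrates the gap the paper addresses: independence preservation with only $2K$ projection vectors requires a deliberate construction rather than an arbitrary low-dimensional projection.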
Cite
Text
Arpit et al. "Dimensionality Reduction with Subspace Structure Preservation." Neural Information Processing Systems, 2014.
Markdown
[Arpit et al. "Dimensionality Reduction with Subspace Structure Preservation." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/arpit2014neurips-dimensionality/)
BibTeX
@inproceedings{arpit2014neurips-dimensionality,
title = {{Dimensionality Reduction with Subspace Structure Preservation}},
author = {Arpit, Devansh and Nwogu, Ifeoma and Govindaraju, Venu},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {712--720},
url = {https://mlanthology.org/neurips/2014/arpit2014neurips-dimensionality/}
}