Convex Multi-View Subspace Learning

Abstract

Subspace learning seeks a low-dimensional representation of data that enables accurate reconstruction. However, in many applications, data is obtained from multiple sources rather than a single source (e.g., an object might be viewed by cameras at different angles, or a document might consist of text and images). The conditional independence of separate sources imposes constraints on their shared latent representation, which, if respected, can improve the quality of the learned low-dimensional representation. In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality. For this formulation, we develop an efficient algorithm that recovers an optimal data reconstruction by exploiting an implicit convex regularizer, then recovers the corresponding latent representation and reconstruction model, jointly and optimally. Experiments illustrate that the proposed method produces high-quality results.
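The "implicit convex regularizer" idea in the abstract can be illustrated with a standard convex relaxation of low-rank subspace recovery: replacing a rank constraint with a nuclear-norm penalty, whose proximal operator is singular-value soft-thresholding. The sketch below is a minimal illustration of that general technique, not the paper's algorithm; the two synthetic "views", their dimensions, and the regularization weight are hypothetical choices.

```python
import numpy as np

def svd_threshold(M, tau):
    """Proximal operator of the nuclear norm: soft-threshold
    the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# min_Z 0.5*||X - Z||_F^2 + lam*||Z||_*  has the closed-form
# solution Z = svd_threshold(X, lam): the nuclear norm acts as a
# convex surrogate for rank, so Z is a low-rank reconstruction.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 50))        # shared latent factors (rank 2)
X1 = rng.standard_normal((10, 2)) @ H   # view 1 of the same objects
X2 = rng.standard_normal((8, 2)) @ H    # view 2
X = np.vstack([X1, X2])                 # stacking views couples them through shared structure
Z = svd_threshold(X, 5.0)               # lam = 5.0 is an arbitrary illustrative weight
print(np.linalg.matrix_rank(Z, tol=1e-8))  # rank of Z never exceeds rank of X
```

Because soft-thresholding only shrinks singular values, the recovered Z has rank at most that of X (here, at most 2), which is how the convex penalty induces a low-dimensional shared representation without an explicit rank constraint.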

Cite

Text

White et al. "Convex Multi-View Subspace Learning." Neural Information Processing Systems, 2012.

Markdown

[White et al. "Convex Multi-View Subspace Learning." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/white2012neurips-convex/)

BibTeX

@inproceedings{white2012neurips-convex,
  title     = {{Convex Multi-View Subspace Learning}},
  author    = {White, Martha and Zhang, Xinhua and Schuurmans, Dale and Yu, Yao-liang},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {1673--1681},
  url       = {https://mlanthology.org/neurips/2012/white2012neurips-convex/}
}