Dual Principal Component Pursuit

Abstract

We consider the problem of outlier rejection in single subspace learning. Classical approaches work directly with a low-dimensional representation of the subspace. Our approach works with a dual representation of the subspace and hence aims to find its orthogonal complement. We pose this problem as an l1-minimization problem on the sphere and show that, under certain conditions on the distribution of the data, any global minimizer of this non-convex problem gives a vector orthogonal to the subspace. Moreover, we show that such a vector can still be found by relaxing the non-convex problem with a sequence of linear programs. Experiments on synthetic and real data show that the proposed approach, which we call Dual Principal Component Pursuit (DPCP), outperforms state-of-the-art methods, especially in the case of high-dimensional subspaces.
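To make the linear-programming relaxation described in the abstract concrete, the sketch below implements one common reading of it: starting from the direction of least variance, repeatedly minimize the l1 norm of the data projected onto a candidate normal vector, subject to an affine constraint against the previous iterate, and re-normalize. This is not the authors' reference implementation; the data layout (a D x N matrix with points as columns), the initialization, the iteration count, and the use of scipy.optimize.linprog for each LP subproblem are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog


def dpcp_lp(X, n_iter=10):
    """Sketch of a DPCP-style linear-programming recursion.

    X : (D, N) array whose columns are data points; inliers are assumed
        to lie near a low-dimensional subspace, the rest are outliers.
    Returns a unit vector intended to be orthogonal to the inlier subspace.
    """
    D, N = X.shape

    # Assumed initialization: the left singular vector of X with the
    # smallest singular value (the direction of least variance).
    U, _, _ = np.linalg.svd(X, full_matrices=True)
    n_hat = U[:, -1]

    # LP variables z = [b (D entries); t (N entries)].
    # Minimizing sum(t) with |X^T b| <= t elementwise equals minimizing ||X^T b||_1.
    c = np.concatenate([np.zeros(D), np.ones(N)])
    A_ub = np.block([[X.T, -np.eye(N)],
                     [-X.T, -np.eye(N)]])
    b_ub = np.zeros(2 * N)
    bounds = [(None, None)] * D + [(0, None)] * N

    for _ in range(n_iter):
        # Solve  min ||X^T b||_1  s.t.  b^T n_hat = 1,  then re-normalize.
        A_eq = np.concatenate([n_hat, np.zeros(N)])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=bounds, method="highs")
        b = res.x[:D]
        n_hat = b / np.linalg.norm(b)

    return n_hat
```

As a quick sanity check under these assumptions, one can generate inliers spanning a hyperplane in R^D plus random outliers and verify that the returned vector is nearly orthogonal to the inliers (i.e., `np.abs(X_inliers.T @ n_hat)` is small).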

Cite

Text

Tsakiris and Vidal. "Dual Principal Component Pursuit." IEEE/CVF International Conference on Computer Vision Workshops, 2015. doi:10.1109/ICCVW.2015.114

Markdown

[Tsakiris and Vidal. "Dual Principal Component Pursuit." IEEE/CVF International Conference on Computer Vision Workshops, 2015.](https://mlanthology.org/iccvw/2015/tsakiris2015iccvw-dual/) doi:10.1109/ICCVW.2015.114

BibTeX

@inproceedings{tsakiris2015iccvw-dual,
  title     = {{Dual Principal Component Pursuit}},
  author    = {Tsakiris, Manolis C. and Vidal, René},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2015},
  pages     = {850--858},
  doi       = {10.1109/ICCVW.2015.114},
  url       = {https://mlanthology.org/iccvw/2015/tsakiris2015iccvw-dual/}
}