Rethinking PCA Through Duality
Abstract
Motivated by the recently shown connection between self-attention and (kernel) principal component analysis (PCA), we revisit the fundamentals of PCA. Using the difference-of-convex (DC) framework, we present several novel formulations and provide new theoretical insights. In particular, we establish kernelizability and out-of-sample applicability for a family of PCA-like problems. Moreover, we uncover that simultaneous iteration, which is connected to the classical QR algorithm, is an instance of the difference-of-convex algorithm (DCA), offering an optimization perspective on this longstanding method. Further, we describe new algorithms for PCA and empirically compare them with state-of-the-art methods. Lastly, we introduce a kernelizable dual formulation for a robust variant of PCA that minimizes the $\ell_1$-deviation of the reconstruction errors.
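The abstract mentions simultaneous iteration, the classical subspace method tied to the QR algorithm that the paper reinterprets as an instance of DCA. As an illustrative sketch only (not the paper's code), the basic method repeatedly multiplies a block of vectors by the covariance matrix and re-orthonormalizes via QR, converging to the top principal directions:

```python
import numpy as np

# Illustrative sketch: simultaneous (subspace) iteration for PCA.
# Synthetic data with a clear spectral gap; not from the paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / len(X)  # sample covariance matrix

k = 2  # number of principal directions sought
Q = rng.standard_normal((C.shape[0], k))
for _ in range(100):
    # One step: multiply by C, then re-orthonormalize with a thin QR.
    Q, _ = np.linalg.qr(C @ Q)

# Columns of Q now approximate the top-k eigenvectors of C,
# i.e. the leading principal directions of the data.
leading_variances = np.diag(Q.T @ C @ Q)
```

The per-iteration QR step is what links this scheme to the QR algorithm; the paper's contribution is showing the same iteration arises from the DCA applied to a suitable DC decomposition of the PCA objective.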
Cite
Text
Quan et al. "Rethinking PCA Through Duality." Advances in Neural Information Processing Systems, 2025.

Markdown

[Quan et al. "Rethinking PCA Through Duality." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/quan2025neurips-rethinking/)

BibTeX
@inproceedings{quan2025neurips-rethinking,
  title = {{Rethinking PCA Through Duality}},
  author = {Quan, Jan and Suykens, Johan and Patrinos, Panagiotis},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/quan2025neurips-rethinking/}
}