AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow

Abstract

Principal component analysis (PCA) is a widely used technique for feature extraction and dimension reduction. In the high-dimension, low-sample-size setting, one may prefer modified principal components with penalized loadings, with the penalty chosen automatically via model selection among candidate models with varying penalties. Earlier work (Zou et al. in J Comput Graph Stat 15(2):265–286, 2006; Gaynanova et al. in J Comput Graph Stat 26(2):379–387, 2017) proposed penalized PCA and showed that model selection for ℓ2-penalized PCA is feasible through the solution path of Ridge regression; however, this approach is extremely time-consuming because it requires intensive matrix inversions. In this paper, we propose a fast model selection method for penalized PCA, named approximated gradient flow (AgFlow), which lowers the computational complexity by exploiting the implicit regularization effect of (stochastic) gradient flow (Ali et al. in: The 22nd International Conference on Artificial Intelligence and Statistics, pp 1370–1378, 2019; Ali et al. in: International Conference on Machine Learning, 2020) and recovers the complete solution path of ℓ2-penalized PCA under varying ℓ2-regularization. Extensive experiments on real-world datasets show that AgFlow outperforms existing methods (Oja and Karhunen in J Math Anal Appl 106(1):69–84, 1985; Hardt and Price in: Advances in Neural Information Processing Systems, pp 2861–2869, 2014; Shamir in: International Conference on Machine Learning, pp 144–152, PMLR, 2015; and the vanilla Ridge estimators) in terms of computation cost.
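The implicit-regularization connection the abstract relies on can be illustrated with a minimal sketch (an assumed illustration, not the authors' code): Ali et al. (2019) show that gradient flow on a least-squares objective, started from zero, tracks the Ridge solution path with penalty roughly λ ≈ 1/t at time t. A single gradient-descent run therefore sweeps the entire ℓ2 path with no matrix inversions, which is the effect AgFlow exploits for fast model selection. All variable names below are illustrative.

```python
import numpy as np

# Synthetic regression problem (illustrative data, not from the paper).
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# Run plain gradient descent from zero on 0.5 * ||y - X @ beta||^2.
step, n_iters = 1e-3, 5000   # small step so discrete GD approximates the flow
beta = np.zeros(p)
for _ in range(n_iters):
    beta -= step * X.T @ (X @ beta - y)   # gradient step, no matrix inverse

# Compare against the explicit Ridge estimator with the matched penalty
# lambda = 1/t, where t is the elapsed "time" of the flow.
t = step * n_iters
lam = 1.0 / t
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
gap = np.linalg.norm(beta - ridge) / np.linalg.norm(ridge)
print(f"relative gap to Ridge with lambda=1/t: {gap:.3f}")
```

Because every intermediate iterate corresponds to a different effective penalty, checkpointing the iterates yields the whole regularization path for roughly the cost of one unregularized fit.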

Cite

Text

Jiang et al. "AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow." Machine Learning, 2021. doi:10.1007/s10994-021-06025-3

Markdown

[Jiang et al. "AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow." Machine Learning, 2021.](https://mlanthology.org/mlj/2021/jiang2021mlj-agflow/) doi:10.1007/s10994-021-06025-3

BibTeX

@article{jiang2021mlj-agflow,
  title     = {{AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow}},
  author    = {Jiang, Haiyan and Xiong, Haoyi and Wu, Dongrui and Liu, Ji and Dou, Dejing},
  journal   = {Machine Learning},
  year      = {2021},
  pages     = {2131--2150},
  doi       = {10.1007/s10994-021-06025-3},
  volume    = {110},
  url       = {https://mlanthology.org/mlj/2021/jiang2021mlj-agflow/}
}