Sparse Kernel Partial Least Squares Regression

Abstract

Partial Least Squares Regression (PLS) and its kernel version (KPLS) have become competitive regression approaches. For moderately sized problems, KPLS performs as well as or better than support vector regression (SVR), with the advantages of simpler implementation, lower training cost, and easier parameter tuning. Unlike SVR, however, KPLS requires manipulation of the full kernel matrix, and the resulting regression function depends on the full training data. In this paper we rigorously derive a sparse KPLS algorithm. The underlying KPLS algorithm is modified to maintain sparsity in all steps of the algorithm. The resulting ν-KPLS algorithm explicitly models centering and bias rather than using kernel centering. An ε-insensitive loss function is used to produce sparse solutions in the dual space. The final regression function for the ν-KPLS algorithm requires only a relatively small set of support vectors.
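For context, the dense KPLS baseline that the paper sparsifies can be sketched with a NIPALS-style iteration in the kernel (dual) space, in the spirit of Rosipal and Trejo's formulation. This is an illustrative sketch, not the paper's ν-KPLS method: the RBF kernel, component count, and function names are assumptions, and note that every training point receives a dual coefficient, which is exactly the lack of sparsity the paper addresses.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    # Kernel choice and gamma are illustrative assumptions.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpls_fit(K, Y, n_components=3, n_iter=100):
    """Dense kernel PLS via NIPALS-style iteration (illustrative sketch).

    K : (n, n) centered kernel matrix of the training data.
    Y : (n, m) centered target matrix.
    Returns alpha, the (n, m) dual coefficients, so that predictions
    are k(x)^T alpha -- one coefficient per training point (dense).
    """
    n = K.shape[0]
    Kd, Yd = K.copy(), Y.copy()   # deflated copies
    T, U = [], []
    for _ in range(n_components):
        u = Yd[:, [0]].copy()     # initialize score from first target column
        for _ in range(n_iter):
            t = Kd @ u            # latent score in kernel feature space
            t /= np.linalg.norm(t)
            c = Yd.T @ t          # target loading
            u = Yd @ c
            u /= np.linalg.norm(u)
        T.append(t)
        U.append(u)
        P = np.eye(n) - t @ t.T   # deflate kernel and targets by the score t
        Kd = P @ Kd @ P
        Yd = Yd - t @ (t.T @ Yd)
    T, U = np.hstack(T), np.hstack(U)
    # Dual regression coefficients from the *original* centered kernel.
    alpha = U @ np.linalg.solve(T.T @ K @ U, T.T @ Y)
    return alpha
```

To use the sketch, center the kernel with `H = I - (1/n) 11^T` (i.e. `Kc = H @ K @ H`), center the targets, fit, and predict on the training set with `Kc @ alpha` plus the target mean. Because `alpha` has a row for every training example, evaluating the regression function needs all training points, motivating the ε-insensitive loss the paper uses to zero out most dual coefficients.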

Cite

Text

Momma and Bennett. "Sparse Kernel Partial Least Squares Regression." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_17

Markdown

[Momma and Bennett. "Sparse Kernel Partial Least Squares Regression." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/momma2003colt-sparse/) doi:10.1007/978-3-540-45167-9_17

BibTeX

@inproceedings{momma2003colt-sparse,
  title     = {{Sparse Kernel Partial Least Squares Regression}},
  author    = {Momma, Michinari and Bennett, Kristin P.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2003},
  pages     = {216--230},
  doi       = {10.1007/978-3-540-45167-9_17},
  url       = {https://mlanthology.org/colt/2003/momma2003colt-sparse/}
}