A Generalized Representer Theorem

Abstract

Wahba’s classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite-dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independently of the (potentially infinite) dimensionality of the feature space.
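As a concrete illustration of the theorem's practical consequence, consider kernel ridge regression (squared loss plus a quadratic regularizer): the minimizer admits a finite expansion f(x) = Σᵢ αᵢ k(xᵢ, x) over the training examples, with α = (K + λI)⁻¹y. The sketch below is an illustrative example, not code from the paper; the RBF kernel, the data, and all parameter values are arbitrary choices for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                   # training inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)  # noisy targets

# Expansion coefficients for the regularized least-squares minimizer:
# alpha = (K + lambda * I)^{-1} y  (lambda chosen arbitrarily here).
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predictions at new points need only kernel evaluations against the
# training examples -- the (possibly infinite-dimensional) feature
# space never appears explicitly.
X_test = np.linspace(-3, 3, 20)[:, None]
f_test = rbf_kernel(X_test, X) @ alpha
```

Note that everything is computed from the n×n Gram matrix K, which is exactly the point of the theorem: the optimization over a potentially infinite-dimensional function space reduces to solving for n coefficients.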

Cite

Text

Schölkopf et al. "A Generalized Representer Theorem." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_27

Markdown

[Schölkopf et al. "A Generalized Representer Theorem." Annual Conference on Computational Learning Theory, 2001.](https://mlanthology.org/colt/2001/scholkopf2001colt-generalized/) doi:10.1007/3-540-44581-1_27

BibTeX

@inproceedings{scholkopf2001colt-generalized,
  title     = {{A Generalized Representer Theorem}},
  author    = {Schölkopf, Bernhard and Herbrich, Ralf and Smola, Alexander J.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2001},
  pages     = {416--426},
  doi       = {10.1007/3-540-44581-1_27},
  url       = {https://mlanthology.org/colt/2001/scholkopf2001colt-generalized/}
}