Learning with Transformation Invariant Kernels

Abstract

This paper considers kernels invariant to translation, rotation and dilation. We show that no non-trivial positive definite (p.d.) kernels exist which are radial and dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly, we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian kernel, even though the Gaussian involves a second parameter (the length scale).
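For intuition, here is a minimal sketch (not taken from the paper) of the kernel in question, assuming the classical thin-plate spline form k(x, y) = ||x − y||² log ||x − y||, which is c.p.d. rather than p.d.; the helper name `thin_plate_kernel` is hypothetical. Note there is no length-scale parameter: rescaling the inputs changes the kernel only by a term of the kind absorbed under the c.p.d. constraints, which is the sense in which such kernels are dilation invariant.

```python
import numpy as np

def thin_plate_kernel(X, Y):
    """Gram matrix of the thin-plate kernel k(x, y) = ||x - y||^2 log ||x - y||.

    This kernel is conditionally positive definite (c.p.d.), so it cannot be
    used directly in a standard s.v.m.; an algorithm handling arbitrary
    c.p.d. kernels (as in the paper) is needed.
    """
    # Pairwise squared Euclidean distances between rows of X and Y.
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    sq = np.maximum(sq, 0.0)  # guard against negative values from round-off
    with np.errstate(divide="ignore", invalid="ignore"):
        K = 0.5 * sq * np.log(sq)  # r^2 log r = 0.5 * r^2 * log(r^2)
    # Limit r^2 log r -> 0 as r -> 0, so replace the NaNs on the diagonal.
    return np.nan_to_num(K)

# Example: a small Gram matrix on random 2-d points.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
print(thin_plate_kernel(X, X))
```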

Cite

Text

Walder and Chapelle. "Learning with Transformation Invariant Kernels." Neural Information Processing Systems, 2007.

Markdown

[Walder and Chapelle. "Learning with Transformation Invariant Kernels." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/walder2007neurips-learning/)

BibTeX

@inproceedings{walder2007neurips-learning,
  title     = {{Learning with Transformation Invariant Kernels}},
  author    = {Walder, Christian and Chapelle, Olivier},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {1561--1568},
  url       = {https://mlanthology.org/neurips/2007/walder2007neurips-learning/}
}