Kernelization of Matrix Updates, When and How?

Abstract

We define what it means for a learning algorithm to be kernelizable in the cases where the instances are vectors, asymmetric matrices, and symmetric matrices, respectively. In each case we characterize kernelizability in terms of an invariance of the algorithm under certain orthogonal transformations. If the algorithm's action relies on a linear prediction, then we show that in each case the linear parameter vector must be a certain linear combination of the instances. We give a number of examples of how to apply our methods. In particular, we show how to kernelize multiplicative updates for symmetric instance matrices.
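The representer-style claim above — that the linear parameter vector is a linear combination of the instances — is what makes a prediction expressible purely through kernel evaluations. A minimal sketch for the vector case (names and the linear kernel are illustrative assumptions, not from the paper):

```python
import numpy as np

# Sketch: if w = sum_i alpha_i * x_i, then <w, x> equals
# sum_i alpha_i * k(x_i, x) for the kernel k inducing the inner product.
# Here we use the plain linear kernel k(u, v) = <u, v> as an assumption.

def linear_predict(instances, alpha, x):
    """Primal prediction <w, x> with w = sum_i alpha[i] * instances[i]."""
    w = alpha @ instances
    return w @ x

def kernel_predict(instances, alpha, x, kernel):
    """The same prediction, computed only from kernel evaluations."""
    return sum(a * kernel(xi, x) for a, xi in zip(alpha, instances))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))      # four instance vectors in R^3
alpha = rng.standard_normal(4)       # combination coefficients
x = rng.standard_normal(3)           # a new instance
linear_kernel = lambda u, v: u @ v

assert np.isclose(linear_predict(X, alpha, x),
                  kernel_predict(X, alpha, x, linear_kernel))
```

The same identity is what a kernelized update exploits: the algorithm never needs the explicit feature vectors, only the coefficients and kernel values.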

Cite

Text

Warmuth et al. "Kernelization of Matrix Updates, When and How?" International Conference on Algorithmic Learning Theory, 2012. doi:10.1007/978-3-642-34106-9_28

Markdown

[Warmuth et al. "Kernelization of Matrix Updates, When and How?" International Conference on Algorithmic Learning Theory, 2012.](https://mlanthology.org/alt/2012/warmuth2012alt-kernelization/) doi:10.1007/978-3-642-34106-9_28

BibTeX

@inproceedings{warmuth2012alt-kernelization,
  title     = {{Kernelization of Matrix Updates, When and How?}},
  author    = {Warmuth, Manfred K. and Kotlowski, Wojciech and Zhou, Shuisheng},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2012},
  pages     = {350--364},
  doi       = {10.1007/978-3-642-34106-9_28},
  url       = {https://mlanthology.org/alt/2012/warmuth2012alt-kernelization/}
}