A Leave-One-Out Cross Validation Bound for Kernel Methods with Applications in Learning

Abstract

In this paper, we prove a general leave-one-out style cross-validation bound for kernel methods. We apply this bound to several classification and regression problems and compare the results with previously known bounds. One notable aspect of our analysis is that the derived expected generalization bounds reflect both the approximation (bias) and learning (variance) properties of the underlying kernel methods. We are thus able to demonstrate the universality of certain learning formulations.
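As a concrete point of reference for the abstract, the sketch below computes exact leave-one-out residuals for kernel ridge regression, a standard kernel method, using the well-known closed-form identity e_i = (y_i - yhat_i) / (1 - H_ii) with H = K(K + lambda I)^{-1}. This illustrates the leave-one-out quantity the paper bounds; it is not an implementation of the paper's bound, and all function names, parameters, and data here are illustrative.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def loo_residuals_kernel_ridge(K, y, lam):
    # Closed-form leave-one-out residuals for kernel ridge regression:
    # with hat matrix H = K (K + lam I)^{-1} and fitted values yhat = H y,
    # the residual when point i is held out is (y_i - yhat_i) / (1 - H_ii).
    # This standard identity avoids refitting the model n times.
    n = K.shape[0]
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

# Toy usage on synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
K = rbf_kernel(X, X)
loo = loo_residuals_kernel_ridge(K, y, lam=0.1)
print("leave-one-out mean squared error:", np.mean(loo**2))

The averaged squared leave-one-out residual printed at the end is the empirical quantity that leave-one-out style generalization bounds, such as the one proved in the paper, relate to expected test error.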

Cite

Text

Zhang. "A Leave-One-Out Cross Validation Bound for Kernel Methods with Applications in Learning." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_28

Markdown

[Zhang. "A Leave-One-Out Cross Validation Bound for Kernel Methods with Applications in Learning." Annual Conference on Computational Learning Theory, 2001.](https://mlanthology.org/colt/2001/zhang2001colt-leave/) doi:10.1007/3-540-44581-1_28

BibTeX

@inproceedings{zhang2001colt-leave,
  title     = {{A Leave-One-Out Cross Validation Bound for Kernel Methods with Applications in Learning}},
  author    = {Zhang, Tong},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2001},
  pages     = {427--443},
  doi       = {10.1007/3-540-44581-1_28},
  url       = {https://mlanthology.org/colt/2001/zhang2001colt-leave/}
}