Generalization Bounds for Learning the Kernel Problem
Abstract
In this paper we develop a novel probabilistic generalization bound for learning the kernel problem. First, we show that the generalization analysis of the regularized kernel learning system reduces to the investigation of the suprema of a Rademacher chaos process of order two over the candidate kernels, which we refer to as the Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity using well-established metric entropy integrals and the pseudo-dimension of the set of candidate kernels. Our new methodology mainly relies on the principal theory of U-processes. Finally, we establish satisfactory excess generalization bounds and misclassification error rates for learning Gaussian kernels and general radial basis kernels.
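The central quantity named in the abstract is not spelled out on this page. As a rough sketch, one common formulation of the order-two (homogeneous) Rademacher chaos complexity of a kernel class $\mathcal{K}$ is the following; normalization conventions vary across the literature, and the paper's own may differ:

```latex
% Empirical Rademacher chaos complexity of order two of a kernel class K
% (one standard formulation; the normalization 1/n is a convention).
\[
  \widehat{U}_n(\mathcal{K})
  \;=\;
  \mathbb{E}_{\varepsilon}\,
  \sup_{k \in \mathcal{K}}
  \Bigl|\, \frac{1}{n} \sum_{1 \le i < j \le n}
    \varepsilon_i \varepsilon_j \, k(x_i, x_j) \Bigr|,
\]
% where eps_1, ..., eps_n are i.i.d. Rademacher (+-1) variables and
% x_1, ..., x_n is the sample; the sum over pairs i < j is what makes
% this a (degenerate) U-process of order two over the kernel class.
```

For intuition only, the sketch below estimates this empirical quantity by Monte Carlo for a finite grid of Gaussian bandwidths, in the spirit of the abstract's application to learning Gaussian kernels. The function names, the bandwidth grid, and the 1/n normalization are illustrative assumptions, not the paper's construction, which bounds this complexity analytically via metric entropy integrals and pseudo-dimension rather than by sampling.

```python
import numpy as np

def gaussian_kernel(X, sigma):
    # Pairwise Gaussian kernel matrix k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def empirical_chaos_complexity(X, sigmas, n_mc=1000, seed=None):
    """Monte Carlo estimate (illustrative, not the paper's method) of the
    order-two Rademacher chaos complexity over a finite set of Gaussian
    bandwidths: E_eps sup_sigma | (1/n) sum_{i<j} eps_i eps_j k(x_i, x_j) |."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    iu = np.triu_indices(n, k=1)  # index pairs with i < j
    # Precompute the upper-triangular kernel values for each candidate bandwidth.
    Ks = [gaussian_kernel(X, s)[iu] for s in sigmas]
    total = 0.0
    for _ in range(n_mc):
        eps = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        chaos = np.outer(eps, eps)[iu]         # eps_i * eps_j over pairs i < j
        # Supremum over the candidate kernels of the normalized chaos sum.
        total += max(abs(chaos @ Kv) / n for Kv in Ks)
    return total / n_mc

# Hypothetical usage: 50 points in R^2, bandwidths on a small grid.
X = np.random.default_rng(0).standard_normal((50, 2))
print(empirical_chaos_complexity(X, sigmas=[0.5, 1.0, 2.0], n_mc=500, seed=1))
```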
Cite

Text

Ying and Campbell. "Generalization Bounds for Learning the Kernel Problem." Annual Conference on Computational Learning Theory, 2009.

Markdown

[Ying and Campbell. "Generalization Bounds for Learning the Kernel Problem." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/ying2009colt-generalization/)

BibTeX
@inproceedings{ying2009colt-generalization,
  title = {{Generalization Bounds for Learning the Kernel Problem}},
  author = {Ying, Yiming and Campbell, Colin},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2009},
  url = {https://mlanthology.org/colt/2009/ying2009colt-generalization/}
}