Effective Dimension and Generalization of Kernel Learning

Abstract

We investigate the generalization performance of some learning problems in Hilbert function spaces. We introduce a concept of scale-sensitive effective data dimension, and show that it characterizes the convergence rate of the underlying learning problem. Using this concept, we can naturally extend results for parametric estimation problems in finite-dimensional spaces to non-parametric kernel learning methods. We derive upper bounds on the generalization performance and show that the resulting convergence rates are optimal under various circumstances.
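The abstract's central quantity, a scale-sensitive effective dimension, is commonly expressed in the kernel-learning literature in terms of the Gram matrix spectrum: at regularization scale λ, d(λ) = tr(K(K + λnI)⁻¹) = Σᵢ μᵢ/(μᵢ + λn), where μᵢ are the eigenvalues of K. The sketch below is a minimal NumPy illustration assuming this standard definition and an RBF kernel; the λn scaling, the kernel choice, and the sample data are illustrative assumptions, not taken from the paper, which states its own precise definition.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel on the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def effective_dimension(K, lam):
    """Effective dimension d(lam) = tr(K (K + lam*n*I)^{-1}).

    Equivalently sum_i mu_i / (mu_i + lam*n) over the eigenvalues mu_i
    of K; it interpolates between 0 (lam -> inf) and rank(K) (lam -> 0).
    NOTE: this is the standard kernel-ridge-style definition, assumed
    here for illustration; see the paper for its exact formulation.
    """
    n = K.shape[0]
    mu = np.linalg.eigvalsh(K)       # eigenvalues of the symmetric Gram matrix
    mu = np.clip(mu, 0.0, None)      # guard against tiny negative round-off
    return float(np.sum(mu / (mu + lam * n)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # illustrative sample
K = rbf_kernel(X, gamma=0.5)
for lam in (1e-1, 1e-2, 1e-3):
    print(f"lambda={lam:g}  d(lambda)={effective_dimension(K, lam):.2f}")
```

Qualitatively, d(λ) grows as λ shrinks, tracing how much of the (possibly infinite-dimensional) function space the data can resolve at a given scale; this is the sense in which it stands in for the parameter count of finite-dimensional estimation problems when bounding convergence rates.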

Cite

Text

Zhang. "Effective Dimension and Generalization of Kernel Learning." Neural Information Processing Systems, 2002.

Markdown

[Zhang. "Effective Dimension and Generalization of Kernel Learning." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/zhang2002neurips-effective/)

BibTeX

@inproceedings{zhang2002neurips-effective,
  title     = {{Effective Dimension and Generalization of Kernel Learning}},
  author    = {Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year      = {2002},
  pages     = {471--478},
  url       = {https://mlanthology.org/neurips/2002/zhang2002neurips-effective/}
}