Learning Kernels Using Local Rademacher Complexity
Abstract
We use the notion of local Rademacher complexity to design new algorithms for learning kernels. Our algorithms thereby benefit from the sharper learning bounds based on that notion which, under certain general conditions, guarantee a faster convergence rate. We devise two new learning kernel algorithms: one based on a convex optimization problem for which we give an efficient solution using existing learning kernel techniques, and another one that can be formulated as a DC-programming problem for which we describe a solution in detail. We also report the results of experiments with both algorithms in both binary and multi-class classification tasks.
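Below is a minimal, hypothetical sketch of the general idea behind kernel learning guided by local Rademacher complexity: base kernels are combined with weights informed by the tail sum of their kernel-matrix eigenvalues, a quantity that appears in local Rademacher complexity bounds. The function names, the weighting heuristic, and the cutoff parameter `theta` are illustrative assumptions and do not reproduce the paper's convex or DC-programming formulations.

```python
# Illustrative sketch only: weight base kernels using tail eigenvalue sums,
# a quantity arising in local Rademacher complexity bounds. Not the paper's
# actual optimization problem.
import numpy as np

def tail_eigensum(K, theta):
    """Sum of the eigenvalues of kernel matrix K beyond the theta largest."""
    eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]  # descending order
    return eigvals[theta:].sum()

def combine_kernels(kernel_matrices, theta=5, eps=1e-12):
    """Heuristically weight each base kernel inversely to its tail eigenvalue
    sum, then return the normalized convex combination and the weights."""
    tails = np.array([tail_eigensum(K, theta) for K in kernel_matrices])
    weights = 1.0 / (tails + eps)
    weights /= weights.sum()
    K_combined = sum(w * K for w, K in zip(weights, kernel_matrices))
    return K_combined, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    # Two base kernels: linear and Gaussian (RBF).
    K_lin = X @ X.T
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K_rbf = np.exp(-sq_dists / (2 * X.shape[1]))
    K, mu = combine_kernels([K_lin, K_rbf], theta=5)
    print("kernel weights:", mu)
```

The combined kernel `K` could then be passed to any standard kernel-based learner (e.g., an SVM); the choice of an inverse tail-sum weighting here is purely for illustration of how the complexity term can favor kernels with fast-decaying spectra.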
Cite
Text
Cortes et al. "Learning Kernels Using Local Rademacher Complexity." Neural Information Processing Systems, 2013.
Markdown
[Cortes et al. "Learning Kernels Using Local Rademacher Complexity." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/cortes2013neurips-learning/)
BibTeX
@inproceedings{cortes2013neurips-learning,
  title     = {{Learning Kernels Using Local Rademacher Complexity}},
  author    = {Cortes, Corinna and Kloft, Marius and Mohri, Mehryar},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {2760--2768},
  url       = {https://mlanthology.org/neurips/2013/cortes2013neurips-learning/}
}