On the Convergence Rate of Lp-Norm Multiple Kernel Learning
Abstract
We derive an upper bound on the local Rademacher complexity of lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local analyses covered only the case p = 1, while ours covers all cases 1 ≤ p ≤ ∞, assuming the feature mappings corresponding to the different kernels are uncorrelated. We also prove a lower bound showing that the upper bound is tight, and derive consequences for the excess loss, namely fast convergence rates of order O(n^(−α/(1+α))), where α is the minimum eigenvalue decay rate of the individual kernels.
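The convergence rate stated in the abstract can be written in display form as follows (a typeset sketch only; α denotes the minimum eigenvalue decay rate of the individual kernels, as defined in the abstract):

```latex
% Fast excess-risk rate from the abstract.
% \alpha is the minimum eigenvalue decay rate of the individual kernels;
% as \alpha grows (faster spectral decay), the rate approaches O(1/n).
\mathcal{O}\!\left( n^{-\frac{\alpha}{1+\alpha}} \right)
```

Note that for α = 1 this recovers the familiar O(n^(−1/2)) rate, while larger α yields faster rates approaching O(n^(−1)).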
Cite

Text

Kloft and Blanchard. "On the Convergence Rate of Lp-Norm Multiple Kernel Learning." Journal of Machine Learning Research, 2012.

Markdown

[Kloft and Blanchard. "On the Convergence Rate of Lp-Norm Multiple Kernel Learning." Journal of Machine Learning Research, 2012.](https://mlanthology.org/jmlr/2012/kloft2012jmlr-convergence/)

BibTeX
@article{kloft2012jmlr-convergence,
  title = {{On the Convergence Rate of Lp-Norm Multiple Kernel Learning}},
  author = {Kloft, Marius and Blanchard, Gilles},
  journal = {Journal of Machine Learning Research},
  year = {2012},
  volume = {13},
  pages = {2465--2502},
  url = {https://mlanthology.org/jmlr/2012/kloft2012jmlr-convergence/}
}