Improved Loss Bounds for Multiple Kernel Learning
Abstract
We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version that uses a simple counting argument over the choice of kernels to obtain a tighter bound when 1-norm regularization (sparsity) is imposed on the kernel learning problem. The second bound is a Rademacher complexity bound that is additive in the (logarithmic) kernel complexity and the margin term. This dependency is superior to that of all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction between the two terms. We illustrate the tightness of our bounds with simulations.
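For context, both bounds concern learning a convex combination of base kernels under a 1-norm (simplex) constraint on the kernel weights. The sketch below states this standard MKL hypothesis class; the symbols $K_m$, $\mu$, and $M$ are generic MKL notation assumed here for illustration, not notation taken from the paper itself.

```latex
% Standard simplex-constrained (1-norm) MKL kernel class; generic notation,
% not the paper's own. The learner chooses both the weights \mu and a
% large-margin classifier in the RKHS of the combined kernel K_\mu.
\[
  \mathcal{K}_M \;=\; \Bigl\{\, K_\mu = \sum_{m=1}^{M} \mu_m K_m
  \;:\; \mu_m \ge 0,\;\; \sum_{m=1}^{M} \mu_m = 1 \,\Bigr\}
\]
% A generalization bound over \mathcal{K}_M combines a margin term with a
% complexity term that grows with the number of kernels M. An "additive"
% bound keeps these two terms as separate summands, whereas a
% "multiplicative" bound couples them as a product.
```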
Cite

Text

Hussain and Shawe-Taylor. "Improved Loss Bounds for Multiple Kernel Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.

Markdown

[Hussain and Shawe-Taylor. "Improved Loss Bounds for Multiple Kernel Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.](https://mlanthology.org/aistats/2011/hussain2011aistats-improved/)

BibTeX
@inproceedings{hussain2011aistats-improved,
title = {{Improved Loss Bounds for Multiple Kernel Learning}},
author = {Hussain, Zakria and Shawe-Taylor, John},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
year = {2011},
pages = {370-377},
volume = {15},
url = {https://mlanthology.org/aistats/2011/hussain2011aistats-improved/}
}