Fast Learning Rate of Multiple Kernel Learning: Trade-Off Between Sparsity and Smoothness
Abstract
We investigate the learning rate of multiple kernel learning (MKL) with L1 and elastic-net regularizations. The elastic-net regularization is a composition of an L1-regularizer for inducing sparsity and an L2-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and show sharper convergence rates than those previously established for both L1 and elastic-net regularizations. Our analysis reveals a trade-off between sparsity and smoothness when choosing between the L1 and elastic-net regularizations: if the ground truth is smooth, the elastic-net regularization is preferred; otherwise, the L1 regularization is preferred.
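For orientation, a minimal sketch of the two regularizers the abstract compares, written in generic MKL notation (the decomposition f = \sum_m f_m over RKHSs \mathcal{H}_m and the parameters \lambda_1, \lambda_2 are illustrative conventions, not necessarily the authors' exact symbols):

L1-MKL: \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}

Elastic-net MKL: \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m} + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}^{2}

Here the \lambda_1 (L1) term induces sparsity over the M candidate kernels, while the additional \lambda_2 (L2) term in the elastic net controls the smoothness of the estimator.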
Cite
Text
Suzuki and Sugiyama. "Fast Learning Rate of Multiple Kernel Learning: Trade-Off Between Sparsity and Smoothness." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.
Markdown
[Suzuki and Sugiyama. "Fast Learning Rate of Multiple Kernel Learning: Trade-Off Between Sparsity and Smoothness." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.](https://mlanthology.org/aistats/2012/suzuki2012aistats-fast/)
BibTeX
@inproceedings{suzuki2012aistats-fast,
title = {{Fast Learning Rate of Multiple Kernel Learning: Trade-Off Between Sparsity and Smoothness}},
author = {Suzuki, Taiji and Sugiyama, Masashi},
booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
year = {2012},
  pages = {1152--1183},
volume = {22},
url = {https://mlanthology.org/aistats/2012/suzuki2012aistats-fast/}
}