Smooth Optimization for Effective Multiple Kernel Learning
Abstract
Multiple Kernel Learning (MKL) can be formulated as a convex-concave min-max optimization problem, whose saddle point corresponds to the optimal solution to MKL. Most MKL methods impose L1-norm simplex constraints on the kernel combination weights, which leads to the optimization of a non-smooth function of those weights. These methods usually divide the optimization into two cycles: one cycle optimizes the kernel combination weights, and the other updates the parameters of the SVM. Despite their efficiency, they tend to discard informative complementary kernels. To improve accuracy, we introduce smoothness into the optimization procedure. Furthermore, we transform the optimization into a single smooth convex optimization problem and employ Nesterov's method to solve it efficiently. Experiments on benchmark data sets demonstrate that the proposed algorithm clearly improves on current MKL methods in a number of scenarios.
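The abstract's key ingredients are (i) replacing a non-smooth max over the simplex with a smooth surrogate and (ii) minimizing the result with Nesterov's accelerated gradient method. The sketch below illustrates that general pattern only; it is not the paper's actual algorithm. It uses log-sum-exp smoothing of a toy piecewise-linear objective (the function names, the added quadratic regularizer, and the Lipschitz bound are illustrative assumptions, not from the paper).

```python
import numpy as np

def nesterov_minimize(grad, x0, lipschitz, num_iters=500):
    """Nesterov's accelerated gradient method for a smooth convex
    objective with an L-Lipschitz gradient (O(1/k^2) convergence)."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(num_iters):
        x_next = y - grad(y) / lipschitz  # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

def smoothed_max_grad(G, mu):
    """Gradient of f_mu(x) = mu*log(sum_p exp(g_p^T x / mu)) + 0.5*||x||^2,
    a smooth surrogate (within mu*log(P)) of max_p g_p^T x, regularized so the
    toy problem is bounded below. Rows of G are the linear pieces g_p."""
    def grad(x):
        z = G @ x / mu
        z -= z.max()        # numerical stabilization before exponentiating
        w = np.exp(z)
        w /= w.sum()        # softmax weights over the P linear pieces
        return G.T @ w + x  # smoothed-max gradient plus regularizer gradient
    return grad

# Toy usage: 5 linear pieces in 10 dimensions.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 10))
mu = 0.1
L = np.linalg.norm(G, 2) ** 2 / mu + 1.0  # conservative Lipschitz bound
x_star = nesterov_minimize(smoothed_max_grad(G, mu), np.zeros(10), L)
```

The trade-off this illustrates: a smaller smoothing parameter `mu` approximates the original non-smooth objective more tightly, but inflates the Lipschitz constant and hence shrinks the step size.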
Cite
Text
Xu et al. "Smooth Optimization for Effective Multiple Kernel Learning." AAAI Conference on Artificial Intelligence, 2010. doi:10.1609/AAAI.V24I1.7675
Markdown
[Xu et al. "Smooth Optimization for Effective Multiple Kernel Learning." AAAI Conference on Artificial Intelligence, 2010.](https://mlanthology.org/aaai/2010/xu2010aaai-smooth/) doi:10.1609/AAAI.V24I1.7675
BibTeX
@inproceedings{xu2010aaai-smooth,
title = {{Smooth Optimization for Effective Multiple Kernel Learning}},
author = {Xu, Zenglin and Jin, Rong and Zhu, Shenghuo and Lyu, Michael R. and King, Irwin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2010},
pages = {637--642},
doi = {10.1609/AAAI.V24I1.7675},
url = {https://mlanthology.org/aaai/2010/xu2010aaai-smooth/}
}