Kernel Matching Pursuit
Abstract
Matching Pursuit algorithms learn a function that is a weighted sum of basis functions, by sequentially appending functions to an initially empty basis, to approximate a target function in the least-squares sense. We show how matching pursuit can be extended to use non-squared error loss functions, and how it can be used to build kernel-based solutions to machine learning problems, while keeping control of the sparsity of the solution. We present a version of the algorithm that makes an optimal choice of both the next basis and the weights of all the previously chosen bases. Finally, links to boosting algorithms and RBF training procedures, as well as an extensive experimental comparison with SVMs for classification are given, showing comparable results with typically much sparser models.
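As a rough illustration of the basic (non-back-fitting) variant described above, the following is a minimal matching-pursuit sketch with Gaussian kernel bases centered on the training points. The function names, the `gamma` parameter, and the RBF dictionary are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf(x, c, gamma=1.0):
    # Illustrative Gaussian kernel basis centered at c
    return np.exp(-gamma * (x - c) ** 2)

def kernel_matching_pursuit(X, y, n_terms=5, gamma=1.0):
    """Greedy matching pursuit over kernel functions centered on the
    training points; previously chosen weights are NOT re-optimized
    here (no back-fitting)."""
    # Dictionary: one kernel basis per training point, as columns
    D = np.stack([rbf(X, c, gamma) for c in X], axis=1)
    residual = y.astype(float).copy()
    chosen, weights = [], []
    for _ in range(n_terms):
        # Least-squares weight of each candidate against the residual
        norms = (D ** 2).sum(axis=0)
        w_all = D.T @ residual / norms
        # Pick the basis giving the largest squared-error reduction
        gains = w_all ** 2 * norms
        i = int(np.argmax(gains))
        chosen.append(i)
        weights.append(w_all[i])
        residual = residual - w_all[i] * D[:, i]
    return chosen, weights, residual

# Toy usage: greedily approximate a smooth target
X = np.linspace(-3.0, 3.0, 50)
y = np.sin(X)
idx, w, r = kernel_matching_pursuit(X, y, n_terms=10, gamma=2.0)
```

Each iteration appends the basis whose least-squares weight most reduces the residual error, which is what controls the sparsity of the final expansion: stopping after `n_terms` steps yields a model with at most that many kernel terms.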
Cite
Text

Vincent and Bengio. "Kernel Matching Pursuit." Machine Learning, 2002. doi:10.1023/A:1013955821559

Markdown

[Vincent and Bengio. "Kernel Matching Pursuit." Machine Learning, 2002.](https://mlanthology.org/mlj/2002/vincent2002mlj-kernel/) doi:10.1023/A:1013955821559

BibTeX
@article{vincent2002mlj-kernel,
title = {{Kernel Matching Pursuit}},
author = {Vincent, Pascal and Bengio, Yoshua},
journal = {Machine Learning},
year = {2002},
pages = {165--187},
doi = {10.1023/A:1013955821559},
volume = {48},
url = {https://mlanthology.org/mlj/2002/vincent2002mlj-kernel/}
}