Theory of Matching Pursuit
Abstract
We analyse matching pursuit for kernel principal components analysis (KPCA) by proving that the sparse subspace it produces is a sample compression scheme. We show that this bound is tighter than the KPCA bound of Shawe-Taylor et al. and highly predictive of the size of the subspace needed to capture most of the variance in the data. We analyse a second matching pursuit algorithm, kernel matching pursuit (KMP), which does not correspond to a sample compression scheme. However, we give a novel bound that views the choice of subspace made by KMP as a compression scheme, and hence derive a VC bound on its future loss. Finally, we describe how the same bound can be applied to other matching-pursuit-related algorithms.
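The abstract refers to matching pursuit greedily building a sparse subspace for KPCA. As a minimal illustrative sketch (not the paper's exact algorithm), one common greedy variant picks, at each step, the training example whose kernel direction captures the most residual variance and then deflates the kernel matrix; the scoring rule and numerical tolerance below are assumptions:

```python
import numpy as np

def sparse_kpca_mp(K, k):
    """Greedy matching-pursuit selection of k basis examples for sparse KPCA.

    K : (n, n) symmetric PSD kernel matrix.
    Returns the indices of the k chosen training examples.
    """
    K = K.astype(float).copy()
    chosen = []
    for _ in range(k):
        diag = np.diag(K)
        # Variance captured by projecting all points onto each candidate
        # kernel direction: sum_j K[i,j]^2 / K[i,i] (guarded against ~0 diag).
        safe = np.maximum(diag, 1e-12)
        scores = np.where(diag > 1e-12, (K ** 2).sum(axis=0) / safe, 0.0)
        i = int(np.argmax(scores))
        chosen.append(i)
        # Deflate: remove the component of K along the chosen direction,
        # so already-selected examples score zero in later rounds.
        K -= np.outer(K[:, i], K[i, :]) / K[i, i]
    return chosen

# Usage sketch on toy data with an RBF kernel:
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
idx = sparse_kpca_mp(K, 5)
```

The sample compression view analysed in the paper rests on exactly this property: the learned subspace is spanned by a small subset of the training examples (`idx` above), so the output of the algorithm can be reconstructed from those examples alone.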
Hussain and Shawe-Taylor. "Theory of Matching Pursuit." Neural Information Processing Systems, 2008.
@inproceedings{hussain2008neurips-theory,
title = {{Theory of Matching Pursuit}},
author = {Hussain, Zakria and Shawe-Taylor, John S.},
booktitle = {Neural Information Processing Systems},
year = {2008},
pages = {721--728},
url = {https://mlanthology.org/neurips/2008/hussain2008neurips-theory/}
}