Optimality Implies Kernel Sum Classifiers Are Statistically Efficient

Abstract

We propose a novel combination of optimization tools with learning-theoretic bounds in order to analyze the sample complexity of optimal kernel sum classifiers. This contrasts with typical learning-theoretic results, which hold for all (potentially suboptimal) classifiers. Our work also justifies assumptions made in prior work on multiple kernel learning. As a byproduct of our analysis, we provide a new form of Rademacher complexity for hypothesis classes containing only optimal classifiers.
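
For readers unfamiliar with the setting, a kernel sum classifier combines several base kernels by summing their Gram matrices and trains a standard kernel classifier on the result. The sketch below is a minimal illustration assuming scikit-learn and synthetic data; the particular base kernels, their parameters, and the SVM are illustrative choices, not the authors' construction:

```python
# Minimal sketch of a kernel sum classifier (illustrative, not the authors' code).
# We sum the Gram matrices of several base kernels -- a sum of positive
# semidefinite kernels is itself a valid kernel -- and train an SVM on it.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def sum_kernel(A, B):
    """Unweighted sum of three base kernels evaluated between rows of A and B."""
    return rbf_kernel(A, B) + linear_kernel(A, B) + polynomial_kernel(A, B, degree=2)

clf = SVC(kernel="precomputed")          # SVM trained on the combined (summed) kernel
clf.fit(sum_kernel(X_tr, X_tr), y_tr)    # n_train x n_train Gram matrix
print("test accuracy:", clf.score(sum_kernel(X_te, X_tr), y_te))  # n_test x n_train
```

Because the summed kernel is itself a valid kernel, the combined classifier fits the standard kernel-methods framework; the paper's analysis concerns the sample complexity of the optimal such classifier rather than of all classifiers in the class.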

Cite

Text

Meyer and Honorio. "Optimality Implies Kernel Sum Classifiers Are Statistically Efficient." International Conference on Machine Learning, 2019.

Markdown

[Meyer and Honorio. "Optimality Implies Kernel Sum Classifiers Are Statistically Efficient." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/meyer2019icml-optimality/)

BibTeX

@inproceedings{meyer2019icml-optimality,
  title     = {{Optimality Implies Kernel Sum Classifiers Are Statistically Efficient}},
  author    = {Meyer, Raphael and Honorio, Jean},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {4566--4574},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/meyer2019icml-optimality/}
}