A Risk Minimization Principle for a Class of Parzen Estimators

Abstract

This paper explores the use of a Maximal Average Margin (MAM) optimality principle for the design of learning algorithms. It is shown that applying this risk minimization principle results in a class of (computationally) simple learning machines similar to the classical Parzen window classifier. A direct relation with Rademacher complexities is established, thereby facilitating analysis and providing a notion of certainty of prediction. This analysis is related to Support Vector Machines by means of a margin transformation. The power of the MAM principle is further illustrated by application to ordinal regression tasks, resulting in an $O(n)$ algorithm able to process large datasets in reasonable time.
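To make the connection to the Parzen window classifier concrete, below is a minimal sketch (not the authors' code; the RBF kernel, bandwidth, and function names are illustrative assumptions) of the kind of rule the MAM principle yields: each training point votes with weight $y_i$, the discriminant is the averaged kernel score $\frac{1}{n}\sum_i y_i K(x_i, x)$, and the magnitude of that score can serve as a notion of certainty of prediction.

```python
# Sketch of a Parzen-window-style MAM classifier (illustrative, not the paper's code):
# f(x) = sign( (1/n) * sum_i y_i K(x_i, x) )
import numpy as np

def rbf_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||X_i - Z_j||^2 / (2 h^2))."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mam_parzen_predict(X_train, y_train, X_test, bandwidth=1.0):
    """Predict labels in {-1, +1}; each training point votes with weight y_i."""
    K = rbf_kernel(X_train, X_test, bandwidth)    # shape (n_train, n_test)
    scores = (y_train[:, None] * K).mean(axis=0)  # average margin per test point
    return np.sign(scores), scores                # |score| gauges prediction certainty

# Toy usage: two Gaussian blobs labeled -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
preds, scores = mam_parzen_predict(X, y, X, bandwidth=0.7)
print("training accuracy:", (preds == y).mean())
```

Note the computational simplicity highlighted in the abstract: training requires no optimization at all, only storing the data, and prediction is a single weighted kernel sum.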

Cite

Text

Pelckmans et al. "A Risk Minimization Principle for a Class of Parzen Estimators." Neural Information Processing Systems, 2007.

Markdown

[Pelckmans et al. "A Risk Minimization Principle for a Class of Parzen Estimators." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/pelckmans2007neurips-risk/)

BibTeX

@inproceedings{pelckmans2007neurips-risk,
  title     = {{A Risk Minimization Principle for a Class of Parzen Estimators}},
  author    = {Pelckmans, Kristiaan and Suykens, Johan and De Moor, Bart},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {1137-1144},
  url       = {https://mlanthology.org/neurips/2007/pelckmans2007neurips-risk/}
}