Bayes and Tukey Meet at the Center Point
Abstract
The Bayes classifier achieves the minimal error rate by constructing a weighted majority over all concepts in the concept class. The Bayes Point [1] instead uses a single concept from the class, and in this way avoids some of the deficiencies of the Bayes classifier. We prove a bound on the generalization error of Bayes Point Machines when learning linear classifiers, and show that it is at most ~1.71 times the generalization error of the Bayes classifier, independent of the input dimension and the size of the training sample. We show that when learning linear classifiers, the Bayes Point is almost identical to the Tukey Median [2] and Center Point [3]. We extend these definitions beyond linear classifiers and define the Bayes Depth of a classifier. We prove a generalization bound in terms of this new definition. Finally, we provide a new concentration of measure inequality for multivariate random variables and apply it to the Tukey Median.
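As background for the abstract's central object: the Tukey (halfspace) depth of a point is the minimum fraction of sample points contained in any closed halfspace through that point, and the Tukey Median is a deepest point. The sketch below is not from the paper; it is a minimal NumPy illustration that approximates Tukey depth by sampling random halfspace directions (the function name and the sampled-direction approximation are my own choices).

```python
import numpy as np

def tukey_depth(p, X, n_dirs=1000, seed=0):
    """Approximate Tukey (halfspace) depth of point p w.r.t. sample X:
    the minimum, over sampled directions, of the fraction of points
    falling in the closed halfspace through p on the smaller side."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    proj = (X - p) @ dirs.T  # signed offsets, shape (n_points, n_dirs)
    # For each direction, take the smaller closed halfspace count.
    counts = np.minimum((proj >= 0).sum(axis=0), (proj <= 0).sum(axis=0))
    return counts.min() / len(X)
```

A centrally located point gets depth near 1/2, while a point outside the convex hull gets depth near 0; a Center Point is guaranteed depth at least 1/(d+1). Sampling directions only upper-bounds the true minimum, but in low dimension it conveys the definition well.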
Cite
Text
Gilad-Bachrach et al. "Bayes and Tukey Meet at the Center Point." Annual Conference on Computational Learning Theory, 2004. doi:10.1007/978-3-540-27819-1_38
Markdown
[Gilad-Bachrach et al. "Bayes and Tukey Meet at the Center Point." Annual Conference on Computational Learning Theory, 2004.](https://mlanthology.org/colt/2004/giladbachrach2004colt-bayes/) doi:10.1007/978-3-540-27819-1_38
BibTeX
@inproceedings{giladbachrach2004colt-bayes,
title = {{Bayes and Tukey Meet at the Center Point}},
author = {Gilad-Bachrach, Ran and Navot, Amir and Tishby, Naftali},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2004},
pages = {549--563},
doi = {10.1007/978-3-540-27819-1_38},
url = {https://mlanthology.org/colt/2004/giladbachrach2004colt-bayes/}
}