Probably Almost Bayes Decisions
Abstract
In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to “(efficiently) learn probably almost optimal classifications” from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian discriminant functions for this purpose. We analyze this approach for different classes of distribution functions of Boolean features: kth order Bahadur–Lazarsfeld expansions and kth order Chow expansions. In both cases, we obtain upper bounds for the required sample size which are small polynomials in the relevant parameters and which match the lower bounds known for these classes. Moreover, the learning algorithms are efficient.
Cite
Text
Fischer et al. "Probably Almost Bayes Decisions." Annual Conference on Computational Learning Theory, 1991. doi:10.1006/inco.1996.0074
Markdown
[Fischer et al. "Probably Almost Bayes Decisions." Annual Conference on Computational Learning Theory, 1991.](https://mlanthology.org/colt/1991/fischer1991colt-probably/) doi:10.1006/inco.1996.0074
BibTeX
@inproceedings{fischer1991colt-probably,
title = {{Probably Almost Bayes Decisions}},
author = {Fischer, Paul and Pölt, Stefan and Simon, Hans Ulrich},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1991},
pages = {88-94},
doi = {10.1006/inco.1996.0074},
url = {https://mlanthology.org/colt/1991/fischer1991colt-probably/}
}