Linear Classifiers Are Nearly Optimal When Hidden Variables Have Diverse Effect
Abstract
We analyze classification problems in which data is generated by a two-tiered random process. The class is generated first, then a layer of conditionally independent hidden variables, and finally the observed variables. For sources like this, the Bayes-optimal rule for predicting the class given the values of the observed variables is a two-layer neural network. We show that, if the hidden variables have non-negligible effects on many observed variables, a linear classifier approximates the error rate of the Bayes-optimal classifier up to lower-order terms.
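As an illustration, the following is a minimal sketch of the kind of two-tiered source the abstract describes, with a simple linear rule fit to data drawn from it. All parameters (`k` hidden variables, `m` observed variables, the flip probabilities `p_hidden` and `p_obs`, and the class-conditional-mean weight rule) are hypothetical choices for this sketch, not the paper's exact model or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, k=10, m=200, p_hidden=0.7, p_obs=0.8):
    # Hypothetical source parameters; the paper's model may differ.
    # Tier 1: the class label y, uniform over {-1, +1}.
    y = rng.choice([-1, 1], size=n)
    # Tier 2: hidden variables, conditionally independent given y;
    # each hidden variable agrees with y with probability p_hidden.
    h = np.where(rng.random((n, k)) < p_hidden, y[:, None], -y[:, None])
    # Tier 3: observed variables; each copies a randomly assigned hidden
    # variable with probability p_obs, so every hidden variable affects
    # many observed variables ("diverse effect").
    parent = rng.integers(k, size=m)
    hp = h[:, parent]
    x = np.where(rng.random((n, m)) < p_obs, hp, -hp)
    return x, y

x_tr, y_tr = sample(2000)
x_te, y_te = sample(2000)

# A plain linear classifier: sign of a weighted vote over the observed
# bits, with weights from the difference of class-conditional means.
# The abstract's claim is that linear rules like this come close to the
# Bayes-optimal two-layer network on such sources.
w = x_tr[y_tr == 1].mean(axis=0) - x_tr[y_tr == -1].mean(axis=0)
pred = np.sign(x_te @ w)
print("linear classifier test error:", np.mean(pred != y_te))
```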
Cite
Text

Bshouty and Long. "Linear Classifiers Are Nearly Optimal When Hidden Variables Have Diverse Effect." Annual Conference on Computational Learning Theory, 2009.

Markdown

[Bshouty and Long. "Linear Classifiers Are Nearly Optimal When Hidden Variables Have Diverse Effect." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/bshouty2009colt-linear/)

BibTeX
@inproceedings{bshouty2009colt-linear,
title = {{Linear Classifiers Are Nearly Optimal When Hidden Variables Have Diverse Effect}},
author = {Bshouty, Nader H. and Long, Philip M.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2009},
url = {https://mlanthology.org/colt/2009/bshouty2009colt-linear/}
}