Learning R-of-K Functions by Boosting
Abstract
We investigate further improvement of boosting in the case that the target concept belongs to the class of r-of-k threshold Boolean functions, which answer "+1" if at least r of k relevant variables are positive, and answer "-1" otherwise. Given m examples of an r-of-k function and literals as base hypotheses, popular boosting algorithms (e.g., AdaBoost) construct a consistent final hypothesis by using O(k^2 log m) base hypotheses. While this convergence speed is tight in general, we show that a modification of AdaBoost (confidence-rated AdaBoost [SS99] or InfoBoost [Asl00]) can exploit the property of r-of-k functions that base hypotheses make one-sided errors, and thereby find a consistent final hypothesis by using O(kr log m) base hypotheses. Our result extends the previous investigation by Hatano and Warmuth [HW04] and gives more general examples where confidence-rated AdaBoost or InfoBoost has an advantage over AdaBoost.
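To make the concept class concrete, here is a minimal sketch (not from the paper) of an r-of-k threshold function over Boolean inputs encoded as {-1, +1}; the variable names, the index set `relevant`, and the sample input are assumptions for illustration.

```python
def r_of_k(x, relevant, r):
    """Return +1 if at least r of the relevant variables of x are positive (+1), else -1."""
    positives = sum(1 for i in relevant if x[i] == 1)
    return 1 if positives >= r else -1

# Example: a 2-of-3 function whose relevant variables are indices 0, 1, 2
# of a 5-variable input.
x = [1, -1, 1, -1, -1]
print(r_of_k(x, relevant=[0, 1, 2], r=2))  # -> 1 (two of the three relevant variables are +1)
```

Note that r = 1 gives a disjunction and r = k a conjunction over the relevant variables, so the class interpolates between the two.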
Cite
Hatano and Watanabe. "Learning R-of-K Functions by Boosting." International Conference on Algorithmic Learning Theory, 2004, pp. 114-126. doi:10.1007/978-3-540-30215-5_10
@inproceedings{hatano2004alt-learning,
title = {{Learning R-of-K Functions by Boosting}},
author = {Hatano, Kohei and Watanabe, Osamu},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2004},
pages = {114-126},
doi = {10.1007/978-3-540-30215-5_10},
url = {https://mlanthology.org/alt/2004/hatano2004alt-learning/}
}