New Bounds for Learning Intervals with Implications for Semi-Supervised Learning
Abstract
We study learning of initial intervals in the prediction model. We show that for each distribution D over the domain, there is an algorithm A_D whose probability of a mistake in round m is at most (1/2 + o(1))/m. We also show that the best possible bound that can be achieved in the case in which the same algorithm A must be applied for all distributions D is at least (1/√e − o(1))/m > (3/5 − o(1))/m. Informally, "knowing" the distribution D enables an algorithm to reduce its error rate by a constant factor strictly greater than 1. As advocated by Ben-David et al. (2008), knowledge of D can be viewed as an idealized proxy for a large number of unlabeled examples.
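For concreteness, here is a minimal simulation sketch of the prediction-model setting described in the abstract, under illustrative assumptions: the domain is [0, 1], D is uniform, the target is an initial interval [0, theta], and the learner predicts with the midpoint between the largest positive and smallest negative example seen so far. This midpoint learner and the uniform choice of D are assumptions for illustration only; they are not the paper's distribution-specific algorithm A_D. The script also checks the constant comparison 1/√e ≈ 0.6065 > 3/5 that appears in the lower bound.

```python
import math
import random


def simulate_initial_interval(theta=0.37, rounds=1000, seed=0):
    """Sketch of the prediction model for initial intervals on [0, 1].

    Each round, a point x is drawn i.i.d. from D (here assumed uniform on
    [0, 1]), the learner predicts whether x lies in the target interval
    [0, theta], and then the true label is revealed.  The learner below
    predicts with the midpoint between the largest positive and smallest
    negative example seen so far -- an illustrative consistent learner,
    not the paper's algorithm A_D.
    """
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0  # largest positive / smallest negative seen so far
    mistakes = []
    for m in range(1, rounds + 1):
        x = rng.random()                      # draw from D = Uniform[0, 1]
        prediction = x <= (lo + hi) / 2.0     # midpoint threshold prediction
        truth = x <= theta                    # true label for interval [0, theta]
        mistakes.append(prediction != truth)
        # Label is revealed; update the version-space boundaries.
        if truth:
            lo = max(lo, x)
        else:
            hi = min(hi, x)
    return mistakes


if __name__ == "__main__":
    # Constant comparison from the abstract's lower bound:
    # 1/sqrt(e) ~= 0.6065 exceeds 3/5 = 0.6.
    print(f"1/sqrt(e) = {1 / math.sqrt(math.e):.4f} > 3/5 = 0.6")

    errs = simulate_initial_interval()
    # Empirical mistake frequency over late rounds; the paper's bounds say the
    # per-round mistake probability decays like (constant)/m, with the constant
    # depending on whether the learner knows D.
    late = errs[500:]
    print(f"empirical mistake rate, rounds 501-1000: {sum(late) / len(late):.4f}")
```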
Cite
Text
Helmbold and Long. "New Bounds for Learning Intervals with Implications for Semi-Supervised Learning." Proceedings of the 25th Annual Conference on Learning Theory, 2012.
Markdown
[Helmbold and Long. "New Bounds for Learning Intervals with Implications for Semi-Supervised Learning." Proceedings of the 25th Annual Conference on Learning Theory, 2012.](https://mlanthology.org/colt/2012/helmbold2012colt-new/)
BibTeX
@inproceedings{helmbold2012colt-new,
title = {{New Bounds for Learning Intervals with Implications for Semi-Supervised Learning}},
author = {Helmbold, David P. and Long, Philip M.},
booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
year = {2012},
pages = {30.1-30.15},
volume = {23},
url = {https://mlanthology.org/colt/2012/helmbold2012colt-new/}
}