Towards Tight Bounds for Rule Learning
Abstract
While there is a lot of empirical evidence showing that traditional rule-learning approaches work well in practice, it is nearly impossible to derive analytical results about their predictive accuracy. In this paper, we investigate rule learning from a theoretical perspective. We show that the application of McAllester's PAC-Bayesian bound to rule learning yields a practical learning algorithm, which is based on ensembles of weighted rule sets. Experiments with the resulting learning algorithm show not only that it is competitive with state-of-the-art rule learners, but also that its error rate can often be bounded tightly. In fact, the bound turns out to be tighter than one of the "best" bounds for a practical learning scheme known so far (the Set Covering Machine). Finally, we prove that the bound can be further improved by allowing the learner to abstain from uncertain predictions.
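For context, one commonly cited form of McAllester's PAC-Bayesian bound reads as follows; the notation here is the standard one from the PAC-Bayes literature, not taken from this paper. With probability at least $1-\delta$ over an i.i.d. sample of size $m$, simultaneously for all posterior distributions $Q$ over the hypothesis class:

```latex
\mathrm{err}(Q) \;\le\; \widehat{\mathrm{err}}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{1}{\delta} + \ln m + 2}{2m - 1}}
```

Here $P$ is a prior fixed before seeing the data, and $\mathrm{err}(Q)$ and $\widehat{\mathrm{err}}(Q)$ denote the expected true and empirical error of the $Q$-randomized (Gibbs) classifier. The bound is tight when the posterior stays close to the prior relative to the sample size, which is what makes weighted ensembles of simple rule sets a natural fit.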
Cite
Text
Rückert and Kramer. "Towards Tight Bounds for Rule Learning." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015387
Markdown
[Rückert and Kramer. "Towards Tight Bounds for Rule Learning." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/ruckert2004icml-tight/) doi:10.1145/1015330.1015387
BibTeX
@inproceedings{ruckert2004icml-tight,
title = {{Towards Tight Bounds for Rule Learning}},
author = {Rückert, Ulrich and Kramer, Stefan},
booktitle = {International Conference on Machine Learning},
year = {2004},
doi = {10.1145/1015330.1015387},
url = {https://mlanthology.org/icml/2004/ruckert2004icml-tight/}
}