Refutably Probably Approximately Correct Learning
Abstract
We propose a notion of refutably PAC learning, which formalizes the refutability of hypothesis spaces in the PAC learning model. Intuitively, refutably PAC learning of a concept class F requires that the learning algorithm refute F with high probability if the target concept cannot be approximated by any concept in F with respect to the underlying probability distribution. We give a general upper bound of O((1/ε + 1/ε′) ln(|F^[n]|/δ)) on the number of examples required for refutably PAC learning of F. Here, ε and δ are the standard accuracy and confidence parameters, and ε′ is the refutation accuracy. Furthermore, we also define strongly refutably PAC learning by introducing a refutation threshold. We prove a general upper bound of O((1/ε² + 1/ε′²) ln(|F^[n]|/δ)) for strongly refutably PAC learning of F. These upper bounds reveal that both refutable learnability and strong refutable learnability are equivalent to standard learnability within the polynomial size restriction. We also define the polynomial-time refutable learnability of a concept class, and characterize it.
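The two sample-size bounds stated above can be evaluated numerically. The sketch below is illustrative only: the functions, their names, and the constant factor `c` are our assumptions (big-O notation hides the true constant), with ε, ε′, δ, and |F^[n]| taken as inputs.

```python
import math

def refutably_pac_sample_bound(eps, eps_ref, hyp_count, delta, c=1.0):
    """Evaluate the bound O((1/eps + 1/eps') * ln(|F^[n]| / delta)).

    hyp_count plays the role of |F^[n]|; the constant factor c is a
    placeholder assumption, since the abstract only gives the bound up to O().
    """
    return math.ceil(c * (1.0 / eps + 1.0 / eps_ref)
                     * math.log(hyp_count / delta))

def strongly_refutably_pac_sample_bound(eps, eps_ref, hyp_count, delta, c=1.0):
    """Evaluate the bound O((1/eps^2 + 1/eps'^2) * ln(|F^[n]| / delta))
    for the strongly refutable variant."""
    return math.ceil(c * (1.0 / eps**2 + 1.0 / eps_ref**2)
                     * math.log(hyp_count / delta))
```

For small ε and ε′ the strong variant's quadratic dependence dominates, so it always requires at least as many examples as the plain refutable bound whenever ε, ε′ ≤ 1.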
Cite
Text
Matsumoto and Shinohara. "Refutably Probably Approximately Correct Learning." International Conference on Algorithmic Learning Theory, 1994. doi:10.1007/3-540-58520-6_84

Markdown

[Matsumoto and Shinohara. "Refutably Probably Approximately Correct Learning." International Conference on Algorithmic Learning Theory, 1994.](https://mlanthology.org/alt/1994/matsumoto1994alt-refutably/) doi:10.1007/3-540-58520-6_84

BibTeX
@inproceedings{matsumoto1994alt-refutably,
title = {{Refutably Probably Approximately Correct Learning}},
author = {Matsumoto, Satoshi and Shinohara, Ayumi},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {1994},
pages = {469-483},
doi = {10.1007/3-540-58520-6_84},
url = {https://mlanthology.org/alt/1994/matsumoto1994alt-refutably/}
}