New Results for Random Walk Learning
Abstract
In a very strong positive result for passive learning algorithms, Bshouty et al. showed that DNF expressions are efficiently learnable in the uniform random walk model. It is natural to ask whether the more expressive class of thresholds of parities (TOP) is similarly learnable, but the Bshouty et al. time bound becomes exponential in this case. We present a new approach to weak parity learning that leads to quasi-efficient random walk learnability of TOP. We also introduce a more general random walk model naturally related to the Metropolis-Hastings algorithm and show that DNF is efficiently learnable and that juntas are efficiently agnostically learnable in this model.
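As a rough illustration of the example oracles the abstract refers to, the Python sketch below shows a standard uniform random walk on the Boolean cube (each step re-randomizes one uniformly chosen coordinate) and a generic Metropolis-Hastings walk with a chosen stationary distribution. This is a minimal sketch under assumptions: the walk definitions are the textbook formulations, not necessarily the exact models analyzed in the paper, and the target function `parity_01` and distribution `pi` are hypothetical stand-ins.

```python
import random

def uniform_random_walk(f, n, steps, rng=random):
    """Uniform-updating random walk on {0,1}^n: each step picks a coordinate
    uniformly and re-randomizes it, then reveals the labeled example."""
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] = rng.randrange(2)          # stays or flips with prob 1/2 each
        yield tuple(x), f(tuple(x))

def metropolis_walk(f, n, steps, pi, rng=random):
    """Generic Metropolis-Hastings walk on {0,1}^n with stationary
    distribution pi: propose flipping one coordinate, accept the move
    with probability min(1, pi(x') / pi(x))."""
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1                        # proposal: flip coordinate i
        if rng.random() < min(1.0, pi(tuple(y)) / pi(tuple(x))):
            x = y                        # accept; otherwise stay at x
        yield tuple(x), f(tuple(x))

if __name__ == "__main__":
    n = 5
    parity_01 = lambda x: x[0] ^ x[1]    # toy target concept (a parity)
    # toy product distribution: each coordinate is 1 with probability 0.7
    pi = lambda x: (0.7 ** sum(x)) * (0.3 ** (len(x) - sum(x)))
    for example, label in metropolis_walk(parity_01, n, 10, pi):
        print(example, label)
```

In both walks the learner observes the trajectory of labeled examples rather than independent draws, which is what distinguishes random walk learning from the standard passive (i.i.d.) setting.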
Cite
Text
Jackson and Wimmer. "New Results for Random Walk Learning." Annual Conference on Computational Learning Theory, 2009. doi:10.5555/2627435.2750361
Markdown
[Jackson and Wimmer. "New Results for Random Walk Learning." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/jackson2009colt-new/) doi:10.5555/2627435.2750361
BibTeX
@inproceedings{jackson2009colt-new,
  title = {{New Results for Random Walk Learning}},
  author = {Jackson, Jeffrey C. and Wimmer, Karl},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2009},
  doi = {10.5555/2627435.2750361},
  url = {https://mlanthology.org/colt/2009/jackson2009colt-new/}
}