Separating Models of Learning from Correlated and Uncorrelated Data
Abstract
We consider a natural framework of learning from correlated data, in which successive examples used for learning are generated according to a random walk over the space of possible examples. Previous research has suggested that the Random Walk model is more powerful than comparable standard models of learning from independent examples, by exhibiting learning algorithms in the Random Walk framework that have no known counterparts in the standard model. We give strong evidence that the Random Walk model is indeed more powerful than the standard model, by showing that if any cryptographic one-way function exists (a widely held assumption in cryptography), then there is a class of functions that can be learned efficiently in the Random Walk setting but not in the standard setting where all examples are independent.
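The contrast between the two example-generation models described in the abstract can be illustrated with a minimal sketch. This assumes one common variant of the Random Walk model, in which each step flips a single uniformly chosen coordinate of the current example on the Boolean cube {0,1}^n; the function names, parameters, and target function below are illustrative, not taken from the paper.

```python
import random

def iid_examples(f, n, steps, rng):
    # Standard model: each labeled example is an independent uniform point.
    for _ in range(steps):
        x = [rng.randint(0, 1) for _ in range(n)]
        yield x, f(x)

def random_walk_examples(f, n, steps, rng):
    # Random Walk model: successive examples are highly correlated --
    # each differs from the previous one in exactly one coordinate,
    # chosen uniformly at random (a walk on the Boolean cube {0,1}^n).
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        yield list(x), f(x)
        i = rng.randrange(n)
        x[i] ^= 1  # flip one uniformly chosen bit

# Illustrative target function: parity of the first two bits.
f = lambda x: x[0] ^ x[1]

walk = list(random_walk_examples(f, n=5, steps=4, rng=random.Random(0)))
# Consecutive examples in the walk differ in exactly one coordinate.
for (x1, _), (x2, _) in zip(walk, walk[1:]):
    assert sum(a != b for a, b in zip(x1, x2)) == 1
```

A learner in the Random Walk setting can exploit the fact that consecutive examples differ in one known-size step, whereas the standard-model learner sees no such structure; the paper's separation shows this advantage can be made provable under a one-way-function assumption.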
Cite
Text
Elbaz et al. "Separating Models of Learning from Correlated and Uncorrelated Data." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_43
Markdown
[Elbaz et al. "Separating Models of Learning from Correlated and Uncorrelated Data." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/elbaz2005colt-separating/) doi:10.1007/11503415_43
BibTeX
@inproceedings{elbaz2005colt-separating,
title = {{Separating Models of Learning from Correlated and Uncorrelated Data}},
author = {Elbaz, Ariel and Lee, Homin K. and Servedio, Rocco A. and Wan, Andrew},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2005},
pages = {637--651},
doi = {10.1007/11503415_43},
url = {https://mlanthology.org/colt/2005/elbaz2005colt-separating/}
}