Learning to Create Is as Hard as Learning to Appreciate
Abstract
We explore the relationship between a natural notion of unsupervised learning studied by Kearns et al. (STOC '94), which we call here "learning to create" (LTC), and the standard PAC model of Valiant (CACM '84), which is a form of supervised learning and can be thought of as a formalization of "learning to appreciate". Our main theorem states that "if learning to appreciate is hard, then so is learning to create". That is, we prove that if PAC learning with respect to efficiently samplable input distributions is hard, then solving the LTC problem is also hard. We also investigate ways in which our result is tight.
Cite
Text
Xiao. "Learning to Create Is as Hard as Learning to Appreciate." Annual Conference on Computational Learning Theory, 2010.
Markdown
[Xiao. "Learning to Create Is as Hard as Learning to Appreciate." Annual Conference on Computational Learning Theory, 2010.](https://mlanthology.org/colt/2010/xiao2010colt-learning/)
BibTeX
@inproceedings{xiao2010colt-learning,
title = {{Learning to Create Is as Hard as Learning to Appreciate}},
author = {Xiao, David},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2010},
pages = {516-528},
url = {https://mlanthology.org/colt/2010/xiao2010colt-learning/}
}