On the Sample Complexity of Learning Smooth Cuts on a Manifold
Abstract
Modern data sets, though typically high dimensional, are often generated by processes possessing few essential degrees of freedom, as is the case with human speech. In recent years, such considerations have led to the notion that high dimensional data may be modeled as lying on a submanifold of low intrinsic dimension. We derive bounds on the number of random samples needed before it is possible, with high probability, to approximately separate data into two classes using smooth decision boundaries.
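To make the setting concrete, the following sketch (an illustration of the manifold hypothesis, not a construction from the paper) generates data that sits in a 20-dimensional ambient space but has intrinsic dimension one: points on a circle embedded by a random linear isometry, labeled by a smooth cut of the circle. The choice of manifold, embedding, and cut are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic parameter of a 1-D manifold (a circle), sampled uniformly.
n, ambient_dim = 500, 20
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Embed the circle into the high-dimensional ambient space via a random
# orthonormal basis of a 2-D plane (an isometric linear embedding).
basis, _ = np.linalg.qr(rng.standard_normal((ambient_dim, 2)))
X = np.column_stack([np.cos(theta), np.sin(theta)]) @ basis.T

# A "smooth cut": label each point by which half of the circle it lies on.
# The decision boundary on the manifold is smooth even though the ambient
# dimension is large.
y = (np.sin(theta) > 0).astype(int)

print(X.shape)  # → (500, 20): ambient dimension 20, intrinsic dimension 1
```

Because the embedding is isometric, every sample lies on the unit sphere of the 2-D plane spanned by `basis`, so distances measured along the curve, not the ambient dimension, govern how hard the two classes are to separate.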
Cite
Narayanan and Niyogi. "On the Sample Complexity of Learning Smooth Cuts on a Manifold." Annual Conference on Computational Learning Theory, 2009.
BibTeX
@inproceedings{narayanan2009colt-sample,
title = {{On the Sample Complexity of Learning Smooth Cuts on a Manifold}},
author = {Narayanan, Hariharan and Niyogi, Partha},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2009},
url = {https://mlanthology.org/colt/2009/narayanan2009colt-sample/}
}