Compressed Learning with Regular Concept
Abstract
We revisit compressed learning in the PAC learning framework. Specifically, we derive error bounds for learning halfspace concepts from compressed data. We propose a regularity assumption on pairs of concepts and data distributions that greatly generalizes earlier assumptions. For a regular concept we define a robust factor that characterizes the margin distribution, and we show that this factor tightly controls the generalization error of the learned classifier. Moreover, we extend the analysis to the more general, linearly non-separable case. Empirical results on both toy and real-world data validate our analysis.
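For context, the compressed learning setup studied in the paper takes data labeled by a halfspace, compresses it with a random projection, and trains a linear classifier directly on the compressed measurements. The numpy sketch below illustrates that pipeline under assumed toy settings; the dimensions, the planted margin, and the perceptron learner are illustrative choices, not the paper's construction or experiments.

import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: unit-norm points labeled by a halfspace
# w_star, shifted toward their own side so every point has a clear margin.
# (All constants here are illustrative assumptions.)
n, d, m = 2000, 500, 100                 # samples, ambient dim, compressed dim
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
y = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
X += 0.5 * y[:, None] * w_star           # plant a margin of roughly 0.5
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Compression: random Gaussian projection with N(0, 1/m) entries, which
# preserves inner products (hence margins) up to small distortion.
A = rng.standard_normal((m, d)) / np.sqrt(m)
Z = X @ A.T                              # compressed data in R^m

def perceptron(X, y, epochs=50):
    """Plain perceptron; returns the normal of a learned halfspace."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:       # mistake -> update
                w += yi * xi
    return w

w_hat = perceptron(Z, y)                 # learn directly on compressed data
acc = np.mean(np.sign(Z @ w_hat) == y)
print(f"training accuracy of halfspace learned in compressed domain: {acc:.3f}")

Because the random projection approximately preserves the margin of every point, a halfspace remains learnable in the compressed domain; the paper's robust factor quantifies, roughly speaking, how favorable the margin distribution is for this to hold.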
Cite
Lv et al. "Compressed Learning with Regular Concept." International Conference on Algorithmic Learning Theory, 2010. doi:10.1007/978-3-642-16108-7_16
BibTeX
@inproceedings{lv2010alt-compressed,
title = {{Compressed Learning with Regular Concept}},
author = {Lv, Jiawei and Zhang, Jianwen and Wang, Fei and Wang, Zheng and Zhang, Changshui},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2010},
pages = {163--178},
doi = {10.1007/978-3-642-16108-7_16},
url = {https://mlanthology.org/alt/2010/lv2010alt-compressed/}
}