Generalizing Labeled and Unlabeled Sample Compression to Multi-Label Concept Classes
Abstract
This paper studies sample compression of maximum multi-label concept classes for various notions of VC-dimension. It formulates a sufficient condition for a notion of VC-dimension to yield labeled compression schemes for maximum classes of dimension d, in which the compression sets have size at most d. The same condition also yields a so-called tight sample compression scheme, which we define to generalize Kuzmin and Warmuth's unlabeled binary scheme to the multi-label case. The well-known Graph dimension satisfies our sufficient condition, while neither Pollard's pseudo-dimension nor the Natarajan dimension does.
Cite
Text

Samei et al. "Generalizing Labeled and Unlabeled Sample Compression to Multi-Label Concept Classes." International Conference on Algorithmic Learning Theory, 2014. doi:10.1007/978-3-319-11662-4_20

Markdown

[Samei et al. "Generalizing Labeled and Unlabeled Sample Compression to Multi-Label Concept Classes." International Conference on Algorithmic Learning Theory, 2014.](https://mlanthology.org/alt/2014/samei2014alt-generalizing/) doi:10.1007/978-3-319-11662-4_20

BibTeX
@inproceedings{samei2014alt-generalizing,
  title = {{Generalizing Labeled and Unlabeled Sample Compression to Multi-Label Concept Classes}},
  author = {Samei, Rahim and Yang, Boting and Zilles, Sandra},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year = {2014},
  pages = {275--290},
  doi = {10.1007/978-3-319-11662-4_20},
  url = {https://mlanthology.org/alt/2014/samei2014alt-generalizing/}
}