Sample Compression for Multi-Label Concept Classes
Abstract
This paper studies labeled sample compression for multi-label concept classes. For a specific extension of the notion of VC-dimension to multi-label classes, we prove that every maximum multi-label class of dimension d has a sample compression scheme in which every sample is compressed to a subset of size at most d. We further show that every multi-label class of dimension 1 has a sample compression scheme using only sets of size at most 1. As opposed to the binary case, the latter result is not immediately implied by the former, since there are multi-label concept classes of dimension 1 that are not contained in maximum classes of dimension 1.
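The abstract builds on the classical binary notion of VC-dimension, which the paper then extends to the multi-label setting. As a hedged illustration of the binary baseline only (the function name and representation are mine, not from the paper, and the paper's specific multi-label extension is not reproduced here), the following sketch computes the VC-dimension of a small finite binary concept class by brute force:

```python
from itertools import combinations

def vc_dimension(concept_class, domain):
    """Brute-force VC-dimension of a finite binary concept class.

    Each concept is given as the set of domain points it labels 1.
    A set S is shattered if every subset of S arises as S intersected
    with some concept; the VC-dimension is the largest |S| shattered.
    """
    concepts = [frozenset(c) for c in concept_class]
    best = 0
    for d in range(1, len(domain) + 1):
        shattered_some = False
        for S in combinations(domain, d):
            S = frozenset(S)
            patterns = {S & c for c in concepts}
            if len(patterns) == 2 ** d:  # all 2^d subsets realized
                shattered_some = True
                break
        if shattered_some:
            best = d
        else:
            break  # no set of size d shattered, so none larger either
    return best

# Example: singletons plus the empty concept over {0, 1, 2};
# this class has VC-dimension 1.
singletons = [set(), {0}, {1}, {2}]
print(vc_dimension(singletons, [0, 1, 2]))  # 1
```

In the multi-label setting each point carries one of more than two labels, so "shattering" must be redefined; the paper's results concern one specific such extension, for which maximum classes of dimension d admit compression to subsets of size at most d.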
Cite
Text
Samei et al. "Sample Compression for Multi-Label Concept Classes." Annual Conference on Computational Learning Theory, 2014.
Markdown
[Samei et al. "Sample Compression for Multi-Label Concept Classes." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/samei2014colt-sample/)
BibTeX
@inproceedings{samei2014colt-sample,
  title = {{Sample Compression for Multi-Label Concept Classes}},
  author = {Samei, Rahim and Semukhin, Pavel and Yang, Boting and Zilles, Sandra},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2014},
  pages = {371--393},
  url = {https://mlanthology.org/colt/2014/samei2014colt-sample/}
}