On Version Space Compression

Abstract

We study compressing labeled data samples so as to maintain version space information. While classic compression schemes [11] only ask for recovery of a sample’s labels, many applications, such as distributed learning, require compact representations of richer information contained in a given data sample. In this work, we propose and analyze various frameworks for compression schemes designed to allow for recovery of version spaces. We consider exact versus approximate recovery, as well as compression to subsamples versus compression to subsets of the version space. For all frameworks, we provide positive examples and sufficient conditions for compressibility, while also pointing out limitations by formally establishing the impossibility of compression for certain classes.
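To make the notion concrete, here is a minimal illustrative sketch (not from the paper) for threshold classifiers on the real line, where a threshold t labels x positive iff x ≥ t. The version space of a labeled sample is an interval of consistent thresholds, and compressing to the two boundary points (a subsample of size at most 2) preserves it exactly; the function names and the encoding of samples as (point, label) pairs are assumptions for this sketch.

```python
# Illustrative sketch (assumptions, not the paper's construction):
# version space compression for thresholds on the real line.
# A threshold t labels a point x as 1 iff x >= t.

def version_space(sample):
    """Return (lo, hi): the thresholds t with lo < t <= hi
    that are consistent with every (x, y) in sample."""
    lo = max((x for x, y in sample if y == 0), default=float("-inf"))
    hi = min((x for x, y in sample if y == 1), default=float("inf"))
    return (lo, hi)

def compress(sample):
    """Compress to a subsample of size <= 2 (the largest negative
    and smallest positive point) preserving the version space."""
    neg = [(x, y) for x, y in sample if y == 0]
    pos = [(x, y) for x, y in sample if y == 1]
    kept = []
    if neg:
        kept.append(max(neg))
    if pos:
        kept.append(min(pos))
    return kept

sample = [(0.5, 0), (1.0, 0), (2.0, 1), (3.0, 1)]
# The two kept points determine the same interval of thresholds.
assert version_space(compress(sample)) == version_space(sample)
```

This is an instance of exact recovery by compression to a subsample; the paper's frameworks also cover approximate recovery and compression to subsets of the version space itself.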

Cite

Text

Ben-David and Urner. "On Version Space Compression." International Conference on Algorithmic Learning Theory, 2016. doi:10.1007/978-3-319-46379-7_4

Markdown

[Ben-David and Urner. "On Version Space Compression." International Conference on Algorithmic Learning Theory, 2016.](https://mlanthology.org/alt/2016/bendavid2016alt-version/) doi:10.1007/978-3-319-46379-7_4

BibTeX

@inproceedings{bendavid2016alt-version,
  title     = {{On Version Space Compression}},
  author    = {Ben-David, Shai and Urner, Ruth},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2016},
  pages     = {50--64},
  doi       = {10.1007/978-3-319-46379-7_4},
  url       = {https://mlanthology.org/alt/2016/bendavid2016alt-version/}
}