Estimating Relatedness via Data Compression
Abstract
We show that it is possible to use data compression on independently obtained hypotheses from various tasks to algorithmically provide guarantees that the tasks are sufficiently related to benefit from multitask learning. We give uniform bounds in terms of the empirical average error for the true average error of the n hypotheses provided by deterministic learning algorithms drawing independent samples from a set of n unknown computable task distributions over finite sets.
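The paper's formal guarantees are stated over computable task distributions, but the underlying intuition that a compressor can measure how related two hypotheses are can be sketched with the well-known normalized compression distance (NCD) of Cilibrasi and Vitányi. This is an illustration only, not the paper's algorithm or bounds; the hypothesis strings below are hypothetical placeholders.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when a compressor finds
    shared structure between x and y, near 1 when it finds none."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical serialized hypotheses learned on three tasks.
h1 = b"if pixel_mean > 0.5: predict 1 else predict 0" * 20
h2 = b"if pixel_mean > 0.6: predict 1 else predict 0" * 20
h3 = b"predict the majority label of the 7 nearest points" * 20

print(ncd(h1, h2))  # near-identical hypotheses: small distance
print(ncd(h1, h3))  # structurally different hypotheses: larger distance
```

A small NCD between independently learned hypotheses suggests shared structure across the tasks, which is the kind of evidence of relatedness the paper turns into formal guarantees.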
Cite
Text
Juba. "Estimating Relatedness via Data Compression." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143900
Markdown
[Juba. "Estimating Relatedness via Data Compression." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/juba2006icml-estimating/) doi:10.1145/1143844.1143900
BibTeX
@inproceedings{juba2006icml-estimating,
title = {{Estimating Relatedness via Data Compression}},
author = {Juba, Brendan},
booktitle = {International Conference on Machine Learning},
year = {2006},
pages = {441--448},
doi = {10.1145/1143844.1143900},
url = {https://mlanthology.org/icml/2006/juba2006icml-estimating/}
}