A Theory of Transfer Learning with Applications to Active Learning
Abstract
We explore a transfer learning setting, in which a finite sequence of target concepts is sampled independently according to an unknown distribution from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and the desired accuracy. Our primary interest is formally understanding the fundamental benefits of transfer learning, compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.
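As a rough formalization of the setting sketched in the abstract (the notation below is an illustrative assumption, not taken verbatim from the paper): an unknown prior \pi over the concept space, known only to belong to a family \Pi, generates the targets for the T tasks, and the goal is a per-task expected error of at most \varepsilon.

\[
  h^*_1, h^*_2, \ldots, h^*_T \overset{\text{i.i.d.}}{\sim} \pi, \qquad \pi \in \Pi,
\]
\[
  \max_{1 \le t \le T} \; \mathbb{E}\!\left[ \operatorname{er}\!\left(\hat{h}_t\right) \right] \le \varepsilon,
\]

with the quantity of interest being the total number of label requests across all T tasks, studied asymptotically in T and \varepsilon.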
Cite
Text
Yang et al. "A Theory of Transfer Learning with Applications to Active Learning." Machine Learning, 2013. doi:10.1007/s10994-012-5310-y
Markdown
[Yang et al. "A Theory of Transfer Learning with Applications to Active Learning." Machine Learning, 2013.](https://mlanthology.org/mlj/2013/yang2013mlj-theory/) doi:10.1007/s10994-012-5310-y
BibTeX
@article{yang2013mlj-theory,
title = {{A Theory of Transfer Learning with Applications to Active Learning}},
author = {Yang, Liu and Hanneke, Steve and Carbonell, Jaime G.},
journal = {Machine Learning},
year = {2013},
pages = {161--189},
doi = {10.1007/s10994-012-5310-y},
volume = {90},
url = {https://mlanthology.org/mlj/2013/yang2013mlj-theory/}
}