Learning by Transferring from Unsupervised Universal Sources

Abstract

Category classifiers trained from a large corpus of annotated data are widely accepted as the sources for (hypothesis) transfer learning. Sources generated in this way are tied to a particular set of categories, limiting their transferability across a wide spectrum of target categories. In this paper, we address this largely overlooked yet fundamental source problem by both introducing a systematic scheme for generating universal source hypotheses and proposing a principled, scalable approach to automatically tuning the transfer process. Our approach is based on the insights that expressive source hypotheses can be generated without any supervision and that a sparse combination of such hypotheses facilitates recognition of novel categories from few samples. We demonstrate improvements over the state of the art on object and scene classification in the small-sample-size regime.
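The abstract's core idea, recognizing a novel category from few labeled samples by selecting a sparse combination of many source hypotheses, can be sketched as L1-regularized regression over source-hypothesis scores. The sketch below uses synthetic data and iterative soft-thresholding (ISTA); it is an illustration of the general technique, not the authors' formulation, and every name and setting in it is hypothetical.

```python
import numpy as np

# Toy sketch: a target classifier built as a sparse combination of many
# source hypotheses, learned from only a few labeled target samples.
# Synthetic data; not the paper's actual method.

rng = np.random.default_rng(0)
K, n = 20, 8                         # many source hypotheses, few target samples
H = rng.normal(size=(n, K))          # H[i, k]: score of source hypothesis k on sample i
true_w = np.zeros(K)
true_w[2], true_w[7] = 1.5, -2.0     # only two sources are actually relevant
y = H @ true_w + 0.01 * rng.normal(size=n)   # few-shot target labels

def sparse_combination(H, y, lam=0.5, iters=2000):
    """L1-regularized least squares solved by iterative soft-thresholding."""
    lr = 1.0 / np.linalg.norm(H, 2) ** 2     # step size from the spectral norm
    w = np.zeros(H.shape[1])
    for _ in range(iters):
        w = w - lr * (H.T @ (H @ w - y))     # gradient step on the squared loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

w = sparse_combination(H, y)
# Most entries of w are exactly zero: the learned target classifier reuses
# only the few source hypotheses that explain the labels.
```

The L1 penalty is what makes the few-sample regime workable here: with far more sources than labeled samples, an unregularized fit is underdetermined, while the sparsity prior selects a small subset of hypotheses.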

Cite

Text

Wang and Hebert. "Learning by Transferring from Unsupervised Universal Sources." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10318

Markdown

[Wang and Hebert. "Learning by Transferring from Unsupervised Universal Sources." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/wang2016aaai-learning/) doi:10.1609/AAAI.V30I1.10318

BibTeX

@inproceedings{wang2016aaai-learning,
  title     = {{Learning by Transferring from Unsupervised Universal Sources}},
  author    = {Wang, Yu-Xiong and Hebert, Martial},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2187--2193},
  doi       = {10.1609/AAAI.V30I1.10318},
  url       = {https://mlanthology.org/aaai/2016/wang2016aaai-learning/}
}