One-Shot Adaptation of Supervised Deep Convolutional Models
Abstract
Dataset bias remains a significant barrier to solving real-world computer vision tasks. Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models solved the dataset bias problem? In general, training or fine-tuning a state-of-the-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available. Transfer of models directly to new domains without adaptation has historically led to poor recognition performance. In this paper, we pose the following question: is a single image dataset, much larger than previously explored for adaptation, comprehensive enough to learn general deep models that may be effectively applied to new image domains? In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be? We show that a generic supervised deep CNN model trained on a large dataset reduces, but does not remove, dataset bias. Furthermore, we propose several methods for adaptation with deep models that are able to operate with little (one example per category) or no labeled domain-specific data. Our experiments show that adaptation of deep models on benchmark visual domain adaptation datasets can provide a significant performance boost.
Cite
Text
Hoffman et al. "One-Shot Adaptation of Supervised Deep Convolutional Models." International Conference on Learning Representations, 2014.

Markdown

[Hoffman et al. "One-Shot Adaptation of Supervised Deep Convolutional Models." International Conference on Learning Representations, 2014.](https://mlanthology.org/iclr/2014/hoffman2014iclr-one/)

BibTeX
@inproceedings{hoffman2014iclr-one,
  title = {{One-Shot Adaptation of Supervised Deep Convolutional Models}},
  author = {Hoffman, Judy and Tzeng, Eric and Donahue, Jeff and Jia, Yangqing and Saenko, Kate and Darrell, Trevor},
  booktitle = {International Conference on Learning Representations},
  year = {2014},
  url = {https://mlanthology.org/iclr/2014/hoffman2014iclr-one/}
}