DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
Abstract
We evaluate whether features extracted from the activations of a deep convolutional network, trained in a fully supervised fashion on a large, fixed set of object recognition tasks, can be re-purposed for novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks, and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, so that vision researchers can experiment with deep representations across a range of visual concept learning paradigms.
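The recipe the abstract describes, extracting fixed activations from a network pretrained on ImageNet and feeding them to a simple linear classifier, can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' released DeCAF code: it assumes torchvision's ImageNet-pretrained AlexNet as a stand-in for the original Caffe-era network and scikit-learn's LinearSVC as the downstream classifier. The layer cut-offs loosely mirror the paper's DeCAF6/DeCAF7 features, and train_paths/train_labels are hypothetical placeholders for a new task's data.

# Minimal sketch of the DeCAF idea, assuming torchvision and scikit-learn
# as modern stand-ins for the original implementation.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from sklearn.svm import LinearSVC

# Load a network pretrained on ImageNet; it is used only as a fixed
# feature extractor, never fine-tuned on the new task.
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def activation_features(image_path, layer=6):
    """Return activations from one of the fully connected layers
    (layer=6 or 7, analogous to the paper's DeCAF6/DeCAF7)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = net.features(x)            # convolutional stack
        x = net.avgpool(x).flatten(1)  # pool and flatten to (1, 9216)
        # torchvision's AlexNet classifier is:
        # [Dropout, fc6, ReLU, Dropout, fc7, ReLU, fc8]
        cut = 3 if layer == 6 else 6   # stop after fc6+ReLU or fc7+ReLU
        for module in net.classifier[:cut]:
            x = module(x)
    return x.squeeze(0).numpy()

# Re-purpose the fixed features for a novel task with a simple linear
# classifier; train_paths and train_labels are hypothetical placeholders.
# features = [activation_features(p) for p in train_paths]
# clf = LinearSVC().fit(features, train_labels)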
Cite

Text:
Donahue et al. "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition." International Conference on Machine Learning, 2014.

Markdown:
[Donahue et al. "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/donahue2014icml-decaf/)

BibTeX:
@inproceedings{donahue2014icml-decaf,
title = {{DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition}},
author = {Donahue, Jeff and Jia, Yangqing and Vinyals, Oriol and Hoffman, Judy and Zhang, Ning and Tzeng, Eric and Darrell, Trevor},
booktitle = {International Conference on Machine Learning},
year = {2014},
pages = {647--655},
volume = {32},
url = {https://mlanthology.org/icml/2014/donahue2014icml-decaf/}
}