Domain Transfer SVM for Video Concept Detection
Abstract
Cross-domain learning methods have shown promising results by leveraging labeled patterns from auxiliary domains to learn a robust classifier for the target domain, which has only a limited number of labeled samples. To cope with the considerable change in feature distribution between different domains in video concept detection, we propose a new cross-domain kernel learning method. Our method, referred to as Domain Transfer SVM (DTSVM), simultaneously learns a kernel function and a robust SVM classifier by minimizing both the structural risk functional of the SVM and the distribution mismatch between labeled and unlabeled samples from the auxiliary and target domains. Comprehensive experiments on the challenging TRECVID corpus demonstrate that DTSVM outperforms existing cross-domain learning and multiple kernel learning methods.
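The distribution-mismatch term in the abstract is commonly measured with the Maximum Mean Discrepancy (MMD) between the auxiliary and target samples in the kernel-induced feature space. A minimal numpy sketch of a squared-MMD estimate under an RBF kernel, assuming the standard biased estimator (function names and the `gamma` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X_aux, X_tgt, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # auxiliary-domain samples X_aux and target-domain samples X_tgt:
    # the kind of distribution-mismatch criterion that DTSVM-style
    # methods minimize jointly with the SVM structural risk.
    K_aa = rbf_kernel(X_aux, X_aux, gamma)
    K_tt = rbf_kernel(X_tgt, X_tgt, gamma)
    K_at = rbf_kernel(X_aux, X_tgt, gamma)
    return K_aa.mean() + K_tt.mean() - 2.0 * K_at.mean()
```

Two samples drawn from the same distribution yield a small MMD, while a domain shift (e.g. a mean offset) yields a larger one; a joint learner would weight or combine base kernels so that this quantity shrinks while classification accuracy is preserved.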
Cite
Text
Duan et al. "Domain Transfer SVM for Video Concept Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206747
Markdown
[Duan et al. "Domain Transfer SVM for Video Concept Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/duan2009cvpr-domain/) doi:10.1109/CVPR.2009.5206747
BibTeX
@inproceedings{duan2009cvpr-domain,
title = {{Domain Transfer SVM for Video Concept Detection}},
author = {Duan, Lixin and Tsang, Ivor Wai-Hung and Xu, Dong and Maybank, Stephen J.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2009},
pages = {1375-1381},
doi = {10.1109/CVPR.2009.5206747},
url = {https://mlanthology.org/cvpr/2009/duan2009cvpr-domain/}
}